I've been fascinated by cognition for as long as I can remember. Intelligence is the defining characteristic of humankind, and so mysterious that there remain active debates among philosophers as to whether our minds are governed by the same natural laws as our physical bodies.

I've always felt that, to understand intelligence, we should study the brain.


An artificial neural network consists of many interconnected units of computation that are modeled after the electrically-activated cells in our brains and spinal cords. They look kind of like this:

A neuron receives inputs from other activated neurons, and when the weighted sum of those inputs exceeds a threshold, it becomes active itself.

Mathematically speaking, the input to a neuron is the linear function

$f(x_0, x_1) = w_0 x_0 + w_1 x_1$

where

$x_0, x_1$ are the outputs of the incoming neurons, and

$w_0, w_1$ are how much each of those inputs is weighted.
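This weighted-sum-and-threshold behavior can be sketched in a few lines of Python. This is a minimal illustration, not a full implementation: the threshold value of 0.5 and the weights below are arbitrary choices for the example.

```python
def neuron(x0, x1, w0, w1, threshold=0.5):
    """Return 1 (active) if the weighted sum of inputs exceeds the threshold."""
    weighted_sum = w0 * x0 + w1 * x1  # the linear function f(x0, x1)
    return 1 if weighted_sum > threshold else 0

# Both incoming neurons active, weights 0.4 and 0.3: sum = 0.7 > 0.5
print(neuron(1, 1, 0.4, 0.3))  # → 1

# Only the first incoming neuron active: sum = 0.4, below the threshold
print(neuron(1, 0, 0.4, 0.3))  # → 0
```

Real networks replace the hard threshold with a smooth activation function so the network can be trained with gradient-based methods, but the core idea is the same.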

So a neural network can be pictured as a diagram of interconnected units, or written out as a mess of mathematical symbols.

These two representations of neural networks are at two different levels of abstraction.