Review of ‘Searching for Collective Behavior in a Large Network of Sensory Neurons’

Last time I reviewed the principle of maximum entropy. Today I am looking at a paper which uses it to create a simplified probabilistic representation of neural dynamics. The idea is to record the spike train of each neuron individually (in this case, around 100 neurons from a salamander retina) while measuring all of them simultaneously. In this way, all correlations in the network are preserved, which allows the construction of a probability distribution describing some features of the network.

Naturally, a probability distribution describing the full network dynamics would require a model of the whole network, which is not what the authors are aiming at here. Instead, they wish to capture just the correct statistics of the network states. What are the network states? Imagine you bin time into small windows. In each window, each neuron will either spike or not. Then, for each time point you will have a binary word with 100 bits, where a 1 corresponds to a spike and a -1 to silence. This is a network state, which we will represent by $\boldsymbol{\sigma}$.
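To make this concrete, here is a minimal sketch (mine, not from the paper) of how raw spike times could be binned into such words; the 20 ms bin width and the toy spike times are placeholder choices:

```python
import numpy as np

def spike_trains_to_words(spike_times, n_neurons, t_max, bin_width):
    """Bin each neuron's spike times and map to +1 (spike) / -1 (silence).

    spike_times: list of arrays; spike_times[i] holds neuron i's spike times in seconds.
    Returns an array of shape (n_bins, n_neurons): one +/-1 word per time bin.
    """
    n_bins = int(round(t_max / bin_width))
    edges = np.linspace(0.0, t_max, n_bins + 1)
    words = -np.ones((n_bins, n_neurons), dtype=int)
    for i, times in enumerate(spike_times):
        counts, _ = np.histogram(times, bins=edges)
        words[counts > 0, i] = 1   # any spike in the bin counts as +1
    return words

# Toy example: 3 neurons, 1 second of random spike times, 20 ms bins.
rng = np.random.default_rng(0)
toy_spikes = [np.sort(rng.uniform(0.0, 1.0, size=rng.integers(5, 20))) for _ in range(3)]
sigma = spike_trains_to_words(toy_spikes, n_neurons=3, t_max=1.0, bin_width=0.02)
print(sigma.shape)  # (50, 3)
```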

So, the goal is to get $P(\boldsymbol{\sigma})$. It would be more interesting to have something like $P(\boldsymbol{\sigma}_{t+1}|\boldsymbol{\sigma}_t)$ (subscripts denoting time) but we don’t always get what we want, now do we? It is a much harder problem to get this conditional probability, so we’ll have to settle for the overall probability of each state. According to maximum entropy, this distribution will be given by $$P(\boldsymbol{\sigma})=\frac{1}{Z}\exp\left(-\sum_i \lambda_i f_i(\boldsymbol{\sigma})\right)$$
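For illustration, suppose the constrained features $f_i$ are the individual firing rates and the pairwise correlations, so that the Lagrange multipliers (sign convention included) become local fields $h_i$ and couplings $J_{ij}$ in the familiar Ising form; the paper additionally constrains the distribution of the total spike count. A minimal sketch with placeholder, unfitted parameters:

```python
import numpy as np
from itertools import product

def log_unnormalized_p(sigma, h, J):
    """Log of the unnormalized P(sigma) for a pairwise maximum entropy model:
    P(sigma) ~ exp(sum_i h_i sigma_i + (1/2) sum_{ij} J_ij sigma_i sigma_j)."""
    return h @ sigma + 0.5 * sigma @ J @ sigma

# Toy model with 5 neurons and placeholder (unfitted) parameters.
rng = np.random.default_rng(1)
n = 5
h = rng.normal(0.0, 0.5, size=n)
J = rng.normal(0.0, 0.2, size=(n, n))
J = (J + J.T) / 2          # symmetric couplings
np.fill_diagonal(J, 0.0)   # no self-coupling

# For n = 5 the partition function Z can be brute-forced over all 2^n states.
states = np.array(list(product([-1, 1], repeat=n)))
log_w = np.array([log_unnormalized_p(s, h, J) for s in states])
p = np.exp(log_w) / np.exp(log_w).sum()
print(p.sum())  # ~1.0
```

For 100 neurons, $Z$ can no longer be brute-forced over $2^{100}$ states, which is why fitting these models to data is the hard part.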

Neuromorphic computing

Lately I’ve been giving some thought to quantitative measures of the brain’s processing power. Let us take the number of neurons in the brain as $10^{11}$ and the number of synapses as $10^{14}$, according to Wikipedia. I am going to assume a very simplified computational model for a neuron:

$$y_i = \phi\left( \sum_j w_{ij} x_j \right)$$

Here $\phi$ is an activation function such as a step function or a sigmoid. So every time a neuron spikes, a computation is performed. From our previous numbers, the sum over $j$ has around $10^3$ terms on average ($10^{14}$ synapses divided by $10^{11}$ neurons), which means there are one thousand additions and one thousand multiplications performed per spike! Let’s consider each of these a floating point operation on a computer. Then we have 2000 flops (I will call flops the plural of a flop and flop/s flops per second, because the terminology is really confusing) per spike. Let us assume an action potential (a spike) lasts about 1 ms, so as an upper bound a neuron can spike 1000 times per second. Thus we have $1000 \times 10^{11} \times 2000 = 2 \times 10^{17}$ flop/s, or 200 petaflop/s.
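For the record, the back-of-the-envelope arithmetic (nothing here beyond the numbers already quoted above):

```python
neurons = 1e11             # total neurons in the brain (Wikipedia figure above)
synapses = 1e14            # total synapses
fan_in = synapses / neurons            # ~1e3 inputs per neuron
flop_per_spike = 2 * fan_in            # one multiply and one add per input
max_rate = 1e3                         # spikes per second, assuming a 1 ms spike
total_flops = neurons * max_rate * flop_per_spike
print(f"{total_flops:.1e} flop/s")     # 2.0e+17, i.e. 200 petaflop/s
```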

That is only an order of magnitude higher than the fastest supercomputers at the time of writing. It seems a first approximation to the brain should be achievable in real time within the next few years, correct? Not with current architectures. An obvious oversight in such a simplistic calculation is the concurrency of the computation. The 2000 operations for a given spike are performed simultaneously in a neuron, whereas a traditional von Neumann architecture would need to do them in steps: first perform all the multiplications, then sum the results into a single value. Even a massively parallel architecture would need one clock cycle for the multiplications, and about 10 clock cycles to integrate the values (summing them in pairs takes $\lceil\log_2 1000\rceil = 10$ clock cycles). The number of flop/s is the same, but you need to run your machine 11 times faster, which destroys power efficiency (you don’t want your brain consuming megawatts of power).
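To see where the extra factor of ~11 clock cycles comes from, here is a toy pairwise (tree) reduction: after one round of multiplications, summing 1000 partial products in adjacent pairs takes $\lceil\log_2 1000\rceil = 10$ rounds.

```python
import math

def tree_reduce_steps(values):
    """Sum a list by repeatedly adding adjacent pairs; returns (sum, number of rounds)."""
    steps = 0
    while len(values) > 1:
        pairs = [values[i] + values[i + 1] for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:            # odd element carries over to the next round
            pairs.append(values[-1])
        values = pairs
        steps += 1
    return values[0], steps

total, rounds = tree_reduce_steps(list(range(1000)))
print(total, rounds, math.ceil(math.log2(1000)))  # 499500 10 10
```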

An even greater problem is the memory bandwidth. At each update step, you need to move $10^{11}$ numbers to your computational cores, compute their new values and move them back. If each is a double, that is on the order of 800 GB each way per step just for the main operation (i.e. not counting any temporary variables for the sum reduction discussed above); at 1000 updates per second this amounts to hundreds of TB/s, which does seem out of reach of current supercomputers, and is also not very power efficient.
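The corresponding data-movement arithmetic, under the same assumptions (one 8-byte double per neuron output, one update per 1 ms time step):

```python
neurons = 1e11
bytes_per_value = 8            # one double per neuron output
updates_per_s = 1e3            # one update per 1 ms time step
per_step = neurons * bytes_per_value           # 8e11 B = 800 GB each way per step
bandwidth = per_step * updates_per_s           # 8e14 B/s = 800 TB/s each way
print(f"{per_step / 1e9:.0f} GB per step, {bandwidth / 1e12:.0f} TB/s sustained")
```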

The brain is not affected by these problems, since the memory is an integral part of the computational infrastructure. In fact, synaptic weights and connections are both the software and the memory of the brain’s architecture. Of course, the way information is encoded is not very well understood, and there are likely many mechanisms at play; the Wikipedia page on neural coding is quite interesting. In any case, synaptic weights alone are likely not enough to fully describe the brain’s architecture, as glial cells might also play a role in memory formation and neuronal connectivity. However, spike-timing-dependent plasticity (STDP, associated with Hebb’s rule) seems to be an adequate coarse-grained description of how synaptic weights are determined (at the time of writing).
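As a concrete illustration, here is the textbook pair-based exponential STDP rule; the amplitudes and time constants are arbitrary placeholder values, not taken from any particular measurement:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=0.020, tau_minus=0.020):
    """Weight change for one pre/post spike pair.

    dt = t_post - t_pre in seconds: pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses, with exponentially decaying windows.
    """
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

# Weight changes for a few spike-time differences (in ms).
for dt_ms in (-40, -10, 10, 40):
    print(dt_ms, round(stdp_dw(dt_ms / 1000.0), 5))
```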

With this in mind, memristors seem to be an appropriate functional equivalent to our coarse-grained description of the brain. In a memristor, the resistance depends on the history of the current that has flowed through it. Thus you can engineer a system where connections which are often used are reinforced. By combining memristive units with transistors, it is in principle possible to create an integrate-and-fire unit similar to a neuron. A device to emulate STDP could also be implemented. The biggest hurdle seems to be connection density. In a planar implementation, only 4 nearest-neighbor connections can be implemented straightforwardly. To reach on the order of 1000 connections per unit (not necessarily with nearest neighbors), 3D structures will need to be used. At the current time, however, there are no promising techniques for the reliable construction of such structures. I foresee that self-assembly will play a large role in this field, again taking heavy inspiration from the way nature does things.
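For reference, the kind of behavior such a memristor/transistor circuit would need to emulate: a minimal leaky integrate-and-fire unit, written here as a standard textbook model with arbitrary parameters rather than a description of any specific device.

```python
def lif_step(v, input_current, dt=1e-3, tau=0.02, v_rest=-0.070,
             v_thresh=-0.050, v_reset=-0.070, r_m=1e7):
    """One Euler step of a leaky integrate-and-fire neuron.

    Returns (new membrane potential, spiked?). Units: volts, seconds, amps, ohms.
    """
    dv = (-(v - v_rest) + r_m * input_current) * dt / tau
    v = v + dv
    if v >= v_thresh:
        return v_reset, True     # emit a spike and reset
    return v, False

# Drive the unit with a constant 3 nA current for 100 ms and count spikes.
v, spikes = -0.070, 0
for _ in range(100):
    v, fired = lif_step(v, 3e-9)
    spikes += fired
print(spikes)  # a handful of spikes over 100 ms
```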

In spite of these hurdles, I am excited. With progress in science getting harder each year, the only way to keep discovering nature’s secrets will be to enhance our cognitive capabilities, be it through biological or electrical engineering.