25 October 2009

A complete cell-by-cell recording from a human brain

Two thought experiments have been bugging me lately. I've not made much progress on either one, so I thought I'd write them down here, consider why they're interesting and maybe get some input.

The first thought is: what would be the value of a complete cell-by-cell recording of electrical activity from a human brain? Not sub-threshold resolution, but rather 100 billion channels recording the location and time (to the nearest millisecond) of every action potential fired in the brain, as if an extracellular electrode had been wrapped around each and every axon. 30 billion channels (firing patterns) for the cortex, 10 billion channels for the subcortical forebrain (hippocampal region, basal ganglia and thalamus), 1 billion channels for the brainstem and spinal cord, and the rest for the grainy cerebellum. I asked this question on Mahalo Answers several months ago but nothing came of it.
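
For a sense of scale, here is a back-of-the-envelope sketch of the data such a recording would produce. The 100-billion-channel count comes from the numbers above; the ~1 Hz brain-wide average firing rate and 8 bytes per recorded spike are assumptions of mine, for illustration only.

```python
# Back-of-the-envelope arithmetic on the recording described above. The
# 100-billion-channel count is from the post; the ~1 Hz brain-wide average
# firing rate and 8 bytes per recorded spike are assumptions for scale only.
NEURONS = 100e9           # one channel per neuron
MEAN_RATE_HZ = 1.0        # assumed average firing rate across the whole brain
BYTES_PER_SPIKE = 8       # assumed: packed neuron ID plus millisecond timestamp

spikes_per_second = NEURONS * MEAN_RATE_HZ
bytes_per_second = spikes_per_second * BYTES_PER_SPIKE
print(f"{spikes_per_second:.1e} spikes/s, about {bytes_per_second / 1e12:.1f} TB/s")
print(f"one hour of recording: about {bytes_per_second * 3600 / 1e15:.0f} PB")
```

Under those assumptions that is roughly a terabyte of spike times per second, or a few petabytes per hour, before any analysis even begins.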

What I'm really after, I guess, is the long-term purpose and potential of brain imaging (everything from fMRI to neural networks cultured on multielectrode arrays). How would neuroscience change if brain-imaging data were not limited to diffuse blobs, squiggly lines or a handful of individually recorded neurons? It seems clear that we would learn a tremendous amount, but what exactly?

One impulse is to model the data, but how would we do this without knowing the connectivity of the neurons? Seth (2005) and others would argue that we could work out causal, if not synaptic, connectivity through statistical analysis of the information content of the firing pattern of each neuron relative to the others. If we had the computational resources to do that, what might we learn? People interested in evolutionary algorithms, on the other hand, might argue that we could use simple spiking model neurons (Izhikevich, 2003) and let synaptic weights and properties evolve until the model behaved and responded to input like the recorded brain did. (I just made that up; I have no idea if it makes sense. A caveat at this point is that I'm NOT a mathematician, or anywhere near as familiar with neural network modelling as I'd like to be. I'm sure there are abstract functional rules and graph-theoretical relationships or whatever that one could retrieve from a complete cell-by-cell recording of the dynamics of a human brain. But I want them named. Control theory? OK, what might happen in control theory? Neural network software? OK, how would artificial neural networks change? Etc.)
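
For concreteness, here is a minimal sketch of the simple spiking model neuron cited above (Izhikevich, 2003), the kind of unit whose parameters and synaptic weights one might try to evolve against the recorded firing patterns. The 'regular spiking' parameter values come from the paper; the constant input current driving it is an arbitrary assumption of mine.

```python
# A minimal sketch of the Izhikevich (2003) simple spiking neuron model.
import numpy as np

def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one neuron driven by input current I (one value per ms).
    Returns the membrane-potential trace and the spike times (in ms)."""
    v, u = -65.0, b * -65.0            # resting potential and recovery variable
    v_trace, spikes = [], []
    for t, i_t in enumerate(I):
        # 1 ms step, with v updated in two 0.5 ms sub-steps as in the paper's code
        v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + i_t)
        v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + i_t)
        u += a * (b * v - u)
        if v >= 30.0:                  # spike: record it, then reset v and bump u
            v_trace.append(30.0)
            v, u = c, u + d
            spikes.append(t)
        else:
            v_trace.append(v)
    return np.array(v_trace), spikes

# Drive a "regular spiking" cortical cell with a constant input for one second.
volts, spike_times = izhikevich(np.full(1000, 10.0))
print(len(spike_times), "spikes; first few at (ms):", spike_times[:5])
```

An evolutionary fit of the kind I'm imagining would wire many such units together and mutate the weights and the (a, b, c, d) parameters until the simulated spike trains matched the recorded ones, but whether that scales to anything brain-sized is exactly the open question.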

A second impulse is to compare the complete cell-by-cell data with existing knowledge and models of the brain. We know a fair bit about gross anatomical and functional structure. We have a lot of rapid EEG and MEG activity that we don't understand but that seems relevant to various cognitive activities. We'd be able to learn an awful lot about what various states of heightened electrical (e.g. gamma buzz, hippocampal theta rhythm) or metabolic (e.g. the BOLD response) activity look like 'from the inside', on a cell-by-cell basis. But then what? What sort of understanding would we gain from turning a BOLD blob into the electrical chatter of millions of neurons?
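
As a toy illustration of the many-to-one relationship behind that question, here is a minimal sketch, on entirely synthetic data, of how a million spike trains collapse into a single slow BOLD-like trace once smeared by a haemodynamic-style kernel. The neuron count, firing rates and kernel shape are arbitrary assumptions, not a fitted haemodynamic response.

```python
# Synthetic illustration: millions of spikes per second collapse into one
# slow BOLD-like trace after smoothing with a haemodynamic-style kernel.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, duration_ms = 1_000_000, 30_000

# Summed population spike count per millisecond: each neuron fires ~5 Hz on
# average, all sharing a slow task-like modulation (a sum of independent
# Poisson processes is itself Poisson, so we draw the sum directly).
modulation = 1 + 0.5 * np.sin(2 * np.pi * np.arange(duration_ms) / 10_000)
population_rate = rng.poisson(n_neurons * 0.005 * modulation)

# Crude haemodynamic-style kernel: a slow bump peaking ~5 s after the
# neural activity (a stand-in, not a real HRF).
t = np.arange(10_000)
kernel = (t / 5_000) ** 2 * np.exp(-t / 2_500)
kernel /= kernel.sum()

bold_like = np.convolve(population_rate, kernel)[:duration_ms]
print(f"{population_rate.sum():.2e} spikes collapse into one smooth trace, "
      f"peaking at {bold_like.argmax() / 1000:.1f} s")
```

The point of the toy is just that the blob is a massively lossy summary; the interesting question is what the lost detail would buy us.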

A third impulse is to look for neural correlates of sensory, cognitive and behavioural events. The English language is composed of 40 or so phonemes (the smallest units of sound), each of which presumably corresponds to a specific posture of the mouth and larynx. Each such posture should correspond to a more or less fixed pattern of activity in brainstem and upper cervical (C1) motor neurons, and perhaps even in the motor cortex. It should therefore be possible to ask the brain/person to name an object and PREDICT the activity of some of its/his/her neurons from the word you expect it/him/her to utter. A similarly fixed pattern of activation would be induced by photons impinging on the retina, which might allow one to predict the activity of individual neurons in the visual thalamus and the visual cortex. Similar predictions should be possible in the auditory system. We also know of a few deep brain regions where the activity of individual neurons is predictable, such as the bursts of midbrain dopamine neurons in response to unexpected rewards. Would these fixed points of neural activity, in combination with simple or complex tasks, allow us to understand and predict a bit better the activity of the rest of the brain? Not sure where I'm going with this one, but I think it's important.
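
To make the prediction idea concrete, here is a minimal sketch of the sort of encoding model one might fit: estimate one neuron's firing rate for each of the ~40 phonemes from many naming trials, then predict its activity on new trials from the word you expect to be uttered. Every number and variable here is a synthetic placeholder; in the thought experiment the spike counts would come from the cell-by-cell recording.

```python
# Minimal phoneme-to-firing-rate encoding model on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_phonemes, n_trials = 40, 2000

# One-hot design matrix: each row marks which of the ~40 phonemes was uttered.
phoneme_ids = rng.integers(0, n_phonemes, size=n_trials)
X = np.eye(n_phonemes)[phoneme_ids]

# Synthetic "recorded" spike counts: each phoneme drives the neuron at a fixed
# mean rate, plus Poisson noise (the fixed-pattern assumption being tested).
true_rates = rng.uniform(1, 20, size=n_phonemes)
y = rng.poisson(true_rates[phoneme_ids])

# Least-squares estimate of per-phoneme firing rates, then prediction for new trials.
rates_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
test_ids = rng.integers(0, n_phonemes, size=5)
print("predicted rates:", np.round(rates_hat[test_ids], 1))
print("true rates     :", np.round(true_rates[test_ids], 1))
```

If the fixed-pattern assumption holds even approximately, the same recipe would run in the other direction too: read the neurons and guess the phoneme before it is spoken.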

Yet another impulse would be simply to ask the brain/person to describe its/his/her subjective experience/internal processes as cell-by-cell recordings in regions of interest are taking place. Many theories in cognitive and affective neuroscience could be directly applied ('Let's see, you're feeling hungry, a little bit anxious and quite indecisive, right? Not hungry? That's weird, your NPY neurons are quite active...'), and many new ones could be developed. Moreover, I strongly believe the cultural impact of people being able to see and play with visual reconstructions of subjective states and cognitive events would be enormous. Right? What kind of visual reconstructions? Of which states or events? How would people come in contact with them? Statistical analysis of hundreds of neurons in the leech nerve cord allowed Briggman et al. (2005) to identify 'decision-making' neurons whose activity predicted which of two possible activity states the nervous system would adopt in response to a stimulus. Benjamin Libet's work continues to annoy the world with decision-predicting EEG events which supposedly occur a few hundred milliseconds before awareness, and recent versions of the experiment claim successful decision forecasts as much as 10 seconds before the decision reaches awareness. Libet's experiments used crude scalp electrodes, and the recent forecasts come from fMRI. What awesomely weird conclusions about human beings would a complete cell-by-cell recording generate?
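
In the spirit of the Briggman et al. (2005) analysis, here is a minimal sketch of how one might look for a 'decision' axis in population activity: reduce trial-by-trial firing rates to a leading component and ask how well it alone predicts which of two behaviours follows. The data are synthetic, and the neuron counts, effect sizes and simple threshold rule are all assumptions for illustration, not the paper's actual method.

```python
# Sketch: find a low-dimensional "decision" axis in population activity
# and test how well it predicts the upcoming behaviour. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons = 200, 150

# Early-window firing rates: most neurons are noise; a small hypothetical
# "decision-related" subset fires differently depending on the behaviour
# that will eventually be produced (0 = one option, 1 = the other).
choice = rng.integers(0, 2, size=n_trials)
rates = rng.normal(size=(n_trials, n_neurons))
rates[:, :10] += 1.5 * choice[:, None]         # the decision-related subset

# PCA via SVD on the mean-centred trial-by-neuron matrix.
centred = rates - rates.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
pc1 = centred @ vt[0]                          # projection onto leading component

# How well does the leading component alone forecast the behaviour?
predicted = (pc1 > np.median(pc1)).astype(int)
accuracy = max((predicted == choice).mean(), (predicted != choice).mean())
print(f"single-component prediction accuracy: {accuracy:.2f}")
```

With a hundred billion channels instead of a few hundred, the same kind of analysis could in principle be run on any decision, which is exactly what makes the question above so unsettling.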

And so on. I'll return later with the other thought experiment, which is a (somewhat) more realistic and practical version of the same question.
