March 6, 2013

With the President suggesting a multibillion-dollar neuroscience effort, a leading neuroscientist explains the deep conceptual problems with plans to record all the brain's neurons

The Sherlock Holmes novel The Hound of the Baskervilles features the great Grimpen Mire, a treacherous marsh in Dartmoor, England. Holmes’ antagonist, the naturalist Stapleton, knows where the few secure footholds are, allowing him to cross the mire and reach the hills with rare plants and butterflies, but he warns Dr. Watson that a false step can be fatal, the bog inexorably consuming the unsuspecting traveller. Trying to unravel the complexities of the brain is a bit like crossing the great Grimpen Mire: one needs to know where the secure stepping-stones are, and a false step can mean sinking into a morass. As we enter the era of Big Brain Science projects, it is important to know where the next firm foothold is.

As a goal worthy of a multi-billion-dollar brain project, we have now been offered a motto that is nearly as rousing as “climb every mountain”: “record every action potential from every neuron.” According to recent reporting in the New York Times, this goal, proclaimed in a paper published in 2012, will be the basis of a decade-long “Brain Activity Map” project. Not content with a goal as lofty as this in worms, flies and mice, the press reports imply (and the authors also speculate) that these technologies will be used for comprehensive spike recordings in the human brain, generating a “Brain Activity Map” that will provide the answers to Alzheimer’s disease and schizophrenia and lead us out of the “impenetrable jungles of the brain” in which hapless neuroscientists have wandered for the past century.

Neuroscience is most certainly in need of integration, and brain research will without doubt benefit from the communal excitement and scaled-up funding associated with a Big Brain Initiative. However, success will depend on setting the right goals and guarding against irrational exuberance. Successful big science projects are engineering projects with clear, technically feasible goals: landing a human on the moon, sequencing the human genome, finding the Higgs boson. The technologies proposed in the paper under discussion may or may not be feasible in a given species (they will not be feasible in the normal human brain, since the methods involved are invasive and require that the skull be surgically opened). However, technology development is notoriously difficult to predict and may carry unforeseen benefits. What we really need to understand is whether the overall goal is meaningful.

The fundamental problem with the goal of measuring every spike of every neuron is one of conceptual incoherence: the proposal does not stand up to theoretical scrutiny.

According to the paper, the reason we don’t yet understand how the brain works is that brain function depends on so-called “emergent properties” and that these “emergent properties” can only be studied by recording all spikes from all neurons in the brain. “Emergent property” is a troublesome phrase and it is not exactly clear what it means, but the authors also point to correlated or collective behavior of neurons, and to phenomena from physics in which collective behavior plays a role. Further, the authors imply that this correlated or collective behavior cannot be deduced from other levels of observation (including the circuitry), hence the imperative need for the “measure every spike” project.

What is wrong with this picture? First, brains do not exist in isolation. Spikes are driven by two sources: the intrinsic dynamics of the neuronal network, and external stimuli. Even if one recorded all spikes from all neurons (and for the entire life span of the organism), to make any sense of the data one would have to simultaneously record all external stimuli, and all aspects of behavior. It gets worse: there will be individual variation among animals, and each animal will have a different environmental history. The “comprehensive” measurement exercise would extend ad infinitum.
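To put the point schematically (a notational sketch only, not a claim about any particular model), one can write the evolution of the network state as

\[
\frac{dx}{dt} = F\big(x(t);\, \theta\big) + u(t),
\]

where x(t) is the joint state of the neurons, the parameters θ collect the circuit connectivity and cellular properties, and u(t) is the stream of external stimuli. A recording of all spikes samples a single trajectory of x(t); without the corresponding u(t) and the behavioral output, that trajectory cannot be interpreted, let alone generalized to other stimulus histories.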

One could moderate the number of neurons being recorded from, control the environmental variables, and so on; but then one has returned to the realm of what neuroscientists are doing in any case, and we have a specialized technology-development project, not a moon shot or a genome project. Still, to understand whether one should focus all energies on greatly increasing the number of neurons being recorded from, we need to answer the theoretical question of what we gain by recording every neuron. If we cannot successfully argue that comprehensive neuronal recordings solve all our problems, then partial observations certainly won’t.

One is not really interested in the particulars of a given animal’s history of all spikes in the brain: one is interested in characterizing the potential dynamics of neurons, under all possible circumstances. This is the well-known “competence/performance” distinction from linguistics. Suppose we record all English sentences spoken by someone and then play that recording back. No one would say that the tape recorder knew English, even though it repeated the same performance. From a scientific perspective we want to know what the brain is capable of doing in principle, not what it actually does in a specific instance. In other words, we want to understand the laws of brain dynamics, not the details of brain dynamics.

Here is the rub: what sets the laws of the neural network? Well, it is precisely the circuit connections and the physiology of single neurons that the authors have dismissed. The paper would focus all resources on multi-neuron recordings, without any plan to complete the outstanding task of mapping out the anatomical circuitry, itself a huge project that we have only begun to seriously address and that provides a much closer analog to the Genome project. Characterizing the physiological properties of neurons depends on carefully studying individual cells or pairs of cells, also not something that is on the agenda. Once the circuitry and cellular physiology are known, we can in principle derive the pattern of every spike from every neuron, under every environmental stimulus. Network structure and cellular physiology determine the dynamical laws governing the neurons, and therefore drive the spiking activity. Positing an “emergent level” of spiking activity that cannot even in principle be predicted from the circuits, physiology, and inputs is a form of mind-body dualism, which is no longer part of scientific thinking, along with vitalism, the idea that there is a separate “life force” that cannot be reduced to the molecular biology of the cell.
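As a concrete, deliberately simplified illustration of this point, consider a textbook leaky integrate-and-fire network (offered here as a sketch, not as a model anyone proposes to fit):

\[
\tau_i \frac{dV_i}{dt} = -(V_i - V_{\mathrm{rest}}) + \sum_j W_{ij}\, s_j(t) + I_i^{\mathrm{ext}}(t),
\]

with neuron i emitting a spike whenever its voltage V_i crosses threshold. The connectivity W_ij and the cellular constants (time constants, thresholds, reset rules) are the “laws”; the spike trains s_j(t) are their consequence for a given input. Measuring the consequences exhaustively is no substitute for measuring the quantities that appear on the right-hand side.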

In fact, this is what the recently funded (and controversial) multi-billion-dollar European project is geared towards. The Europeans plan to build a comprehensive simulation of human brain activity, starting from details of individual neurons and micro-circuits. The only problem is that they don’t actually have the necessary circuit or physiological information (or must extrapolate from the rodent somatosensory cortex to the human). The way to resolve this is not to measure every spike from every neuron, but to map circuit connectivity and measure cellular physiology. It is this recognition that has led us to propose and embark on the project of mapping out mouse brain circuits, a task that is already enormous and will require many more resources to complete.

Let us now return to the fallacious argument that in order to study the collective dynamics of the neuronal network one must record all neurons. The paper exhibits a curious theoretical disconnect: on one hand the authors point to collective phenomena in physics, and on the other hand they forget the basic lesson we have learned from physics: that as far as collective or thermodynamic behavior goes, the full detailed microscopic behavior of the system does not matter. Only some very limited aspects of the microscopic dynamics filter out to the larger length and time scales: systems exhibit “universal” behaviors independent of much microscopic detail.

The implication in the paper is that measuring every spike will better enable the discovery of collective phenomena for brains. That is the precise opposite of how collective phenomena have been discovered and studied in physics. The study of macroscopic behavior, e.g. thermodynamics, came before a detailed understanding of the microscopic dynamics. Statistical mechanics provides bridges to microscopic dynamics in terms of statistical descriptions, not detailed dynamical descriptions. The same is true for other collective phenomena such as magnetism, superconductivity and superfluidity, as exemplified by the famous Landau theories. In each case, the phenomena were first discovered at the macroscopic level, studied at the macroscopic level, and even the theoretical framework was established at the macroscopic level; the microscopic measurements and statistical mechanical theories entered at a later stage to refine the understanding already established.
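The Landau construction makes this logic explicit. Sketched here for the textbook case of a magnet, purely as an analogy, the macroscopic theory is written directly in terms of an order parameter, with no reference to individual spins:

\[
F(m) \approx F_0 + a\,(T - T_c)\, m^2 + b\, m^4,
\]

where m is the magnetization, an average over an enormous number of microscopic degrees of freedom, and a, b are phenomenological coefficients. Statistical mechanics later supplies the bridge by expressing m as an ensemble average over spin configurations, not as a list of every spin flip; the detailed microscopic trajectory never enters.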

It is unlikely that we will discover analogs of superconductivity or superfluidity in the brain by measuring every spike from every neuron. Analogs already exist and are already being studied at multiple scales of analysis. Animal behavior provides a close analog of the macroscopic behaviors of physical systems, reflecting the collective output of brains that actually matters for the survival of the organism. The study of psychological phenomena in terms of constructs such as memory, attention, language and affect also gets at macroscopic properties of nervous system dynamics, and these can be studied in their own right, somewhat like the Landau theories in physics, although admittedly without the mathematical precision. The collective dynamics of neurons have long been studied in the form of electroencephalography (EEG). Over the last two decades many labs have gathered spiking data simultaneously from dozens to hundreds of neurons. This has not yet led to any tremendous new insight: in fact, much of the dynamics can be captured by the study of correlations between pairs of neurons.
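One common formalization of that last observation, offered here only as an illustration rather than as the specific analysis used in those studies, is the pairwise maximum-entropy (Ising-type) model of population spiking:

\[
P(s_1, \dots, s_N) = \frac{1}{Z} \exp\Big( \sum_i h_i s_i + \sum_{i<j} J_{ij}\, s_i s_j \Big),
\]

where each s_i is a binarized spike count, the h_i are fixed by the mean firing rates, the J_{ij} by the pairwise correlations, and Z is a normalization constant. That such low-order statistical descriptions account for much of the observed population structure is itself evidence that “every spike from every neuron” is not the natural unit of explanation.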

Collective behavior in physics is associated with symmetry principles and conservation laws. For example, sound is a collective motion of fluid molecules. The macroscopic equations of motion of a fluid (the Navier-Stokes equations) may be written down as consequences of the conservation of mass and of momentum. Linearization of these equations gives rise to the wave equation, which describes sound. Note that one does not need to start from the microscopic dynamics of the fluid molecules. What is the nervous system equivalent? Not the symmetry principles important in physics (those still apply, but give you back physical phenomena, for example sound), but so-called functional constraints: what the organism must be able to do in order to survive, and what shapes the nervous system through the evolutionary process.
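For the fluid example the derivation is a standard textbook exercise. Writing the density as a uniform background plus a small fluctuation δρ and linearizing the continuity and momentum equations about a fluid at rest gives

\[
\frac{\partial\, \delta\rho}{\partial t} + \rho_0\, \nabla \cdot v = 0,
\qquad
\rho_0\, \frac{\partial v}{\partial t} = -c^2\, \nabla\, \delta\rho,
\]

which combine into the wave equation \( \partial_t^2\, \delta\rho = c^2 \nabla^2 \delta\rho \), with the sound speed c set by the equation of state. The conservation laws and a linearization are all that is needed; no molecular trajectory is ever consulted.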

This is related to the “computationalist” perspective, spelled out for the visual system by David Marr among others. This research program starts from the requirements the nervous system faces in order for the organism to survive, and tries to understand the neural circuits and activity from this perspective. “Function shapes form”; the deep principles to understand in physics are the symmetry laws, and in biology they are perhaps engineering principles and evolution. In addition to mapping nervous system architecture, one wants to understand what these principles are as they apply to brains. In order to understand brain dysfunction, one wants to understand the laws of normal function.

Will we get there faster by mapping circuits and physiology or by working on new multi-electrode technology? Ideally, one should not have to choose, as long as effort does not get narrowly focused on conceptually ill-formed goals such as measuring every spike of every neuron (or simulating the human brain without adequate data). Much of this is not news to the practicing neuroscientist, but it is worth reminding ourselves as we navigate the new landscape of billion-dollar brain projects. Otherwise we risk the fate of the naturalist Mr. Stapleton as he rushed across the great Grimpen Mire at the conclusion of The Hound of the Baskervilles: even with all his knowledge and expertise, he stepped into the bog and was never heard from again.
