Any neuroscience textbook will tell you that learning and memory in the brain happen through synaptic plasticity, where the strengths of the connections between neurons are modified. However, it’s now clear that there are also non-synaptic forms of activity-dependent plasticity, where neurons change their intrinsic properties without altering the synapses at all. This can happen on either a full-cell (global) or a compartmental (local) scale within the neuron.
One use for global regulation of a cell’s excitability might be to maintain a particular average firing rate following changes in the input patterns. It’s been shown that some input-deprived neurons change their properties so that they spike more readily, meaning that when they do eventually receive input they fire more action potentials than they would have before (see the ‘homeostatic plasticity’ mention in the Scholarpedia entry for Intrinsic Plasticity).
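This kind of rate-targeting regulation is easy to caricature in code. Here’s a minimal sketch, entirely my own construction with made-up numbers (the target rate, adaptation gain, and noisy drive are all illustrative): a toy neuron nudges its spike threshold whenever its running firing rate drifts from a target, so an input-deprived cell ends up more excitable.

```python
import random

random.seed(1)

TARGET_RATE = 0.2  # desired fraction of time steps with a spike (assumed)
LEARN_RATE = 0.01  # how fast the threshold adapts (assumed)

def simulate(threshold, n_steps=5000, drive=1.0):
    """Toy neuron: spikes when a noisy input crosses `threshold`.
    The threshold is nudged so the running rate tracks TARGET_RATE."""
    rate = TARGET_RATE
    for _ in range(n_steps):
        spike = drive * random.random() > threshold
        rate += 0.01 * (spike - rate)                    # running rate estimate
        threshold += LEARN_RATE * (rate - TARGET_RATE)   # homeostatic adjustment
    return threshold, rate

# Deprive the cell of input (drive halved): its threshold falls until the
# target rate is restored, i.e. the deprived cell becomes easier to spike.
t_normal, _ = simulate(threshold=0.8, drive=1.0)
t_deprived, _ = simulate(threshold=0.8, drive=0.5)
```

With full drive the threshold stays near its starting point, because the cell is already at its target rate; with halved drive it slides downward until spiking recovers.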
On the other hand, although local dendritic changes within neurons had been shown to be possible (e.g. ref 2), no one had really pushed the idea to demonstrate a real computational function. This March, Jeff Magee’s lab tried to tackle that in a Nature paper (main ref).
In an earlier paper (ref 3) they used two-photon glutamate uncaging to excite multiple synapses on an oblique branch of a rat CA1 pyramidal cell. If enough inputs were activated (~20) in a short enough time period (~6 ms), fast Na+-based dendritic spikes were initiated. At the soma this appears as non-linear summation: the EPSP from multiple inputs is bigger than the sum of the individual responses. The effect isn’t that big (see figure G below), but it is something. Also, the local voltage change is likely to be much greater, owing to the high input impedance out in the thin dendrites. This could trigger things like synaptic plasticity, or spikes in other dendritic branches.
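To make the summation claim concrete, here is a toy version of the measurement. The ~20-synapse threshold is from the paper, but the unitary EPSP size and the size of the d-spike boost are numbers I made up for illustration:

```python
# Toy model of the uncaging experiment (illustrative numbers, not the
# paper's fits): each synapse contributes a small unitary somatic EPSP;
# once ~20 synapses are co-activated within the initiation window, a
# dendritic Na+ spike adds a fixed extra depolarisation at the soma.

UNITARY_EPSP_MV = 0.2   # somatic EPSP per synapse (assumed)
SPIKE_THRESHOLD = 20    # synapses needed for a dendritic spike (from the paper)
DSPIKE_BOOST_MV = 2.0   # extra somatic depolarisation from the d-spike (assumed)

def expected_sum(n_synapses):
    """Linear prediction: arithmetic sum of the individual responses."""
    return n_synapses * UNITARY_EPSP_MV

def measured_epsp(n_synapses):
    """'Measured' response: linear sum, plus the d-spike boost above threshold."""
    epsp = expected_sum(n_synapses)
    if n_synapses >= SPIKE_THRESHOLD:
        epsp += DSPIKE_BOOST_MV
    return epsp

for n in (10, 19, 20, 25):
    print(n, expected_sum(n), measured_epsp(n))
```

Below threshold the two curves agree (linear summation); at 20 synapses the ‘measured’ response jumps above the linear prediction, which is the supralinearity seen at the soma.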
Unfortunately, dendritic spikes on their own are no longer newsworthy enough to get you into Nature. What Losonczy and co. found was that the cell could regulate this spiking on a branch-by-branch basis. So I could have a dendritic branch that doesn’t ever spike, but by applying carbachol or pairing the dendritic spikes with somatic action potentials, I could turn my ‘weak’ branch into a ‘strong’ one (see figure below). They call this phenomenon branch-strength potentiation (BSP).
They also showed that BSP does not affect the individual synaptic responses: you still get spikes even if you stimulate naive spines on the same branch, and neighbouring branches are not affected. This shows that the change is specific to the local dendritic membrane. Further experiments with knockout mice and pharmacology implied that BSP is mediated, at least in part, by a downregulation of A-type K+ channels (which would indeed render a branch more excitable).
Fair enough. To my theorist’s eye the data seem convincing – I believe in BSP (unlike ESP). My problem, however, lies in their conclusions. They claim that this is a plausible form of input feature storage for these (and maybe all other) neurons. In the supplemental info there is a schematic illustration of a scenario where this might happen in a real cell:
It’s worth looking through this figure. Parts a-f are supposed to form a little story. Part a shows a hypothetical CA1 pyramidal neuron with some weak daughter (blue) and strong parent (red) dendritic branches. The soma is the big white circle and there’s no axon shown. Parts b and c suppose that multiple inputs to the same weak dendrite (here suggested to be an array of CA3 place cells) might fire simultaneously, but because of the weak effect on somatic membrane potential this would result in only poorly timed action potential output. However, if BSP is induced, the weak branch turns into a strong one (part d). The next time these same inputs arrive they cause a dendritic spike, which in turn triggers a reliable somatic action potential (part e). Hence all such CA1 pyramidal cells getting this type of input could ‘learn’ to robustly respond to certain stimulus features (part f).
So is this scenario likely to occur in a real animal? This question really strikes at the heart of the matter. Is BSP actually used by the brain, or is it just an epiphenomenon? One issue is whether these dendritic spikes actually occur in this region in vivo. Losonczy et al. estimate that to initiate a d-spike you need to activate about 20 individual synapses on a single branch, from a typical pool of maybe 200. That sounds feasible, but at the moment no one really knows for sure. Also, d-spike initiation might be more difficult if there is a high level of background synaptic activity, which would make the membrane leakier, thus shunting any further synaptic input.
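The 20-of-200 feasibility question can at least be put into rough numbers. Assuming (and this is a big assumption) that each synapse is active independently within a single integration window, the chance of a d-spike is just a binomial tail:

```python
from math import comb

def p_dspike(n_syn=200, k_needed=20, p_active=0.05):
    """Probability that at least `k_needed` of `n_syn` synapses are active
    in one integration window, each active independently with probability
    `p_active` (all numbers illustrative, not measured values)."""
    return sum(comb(n_syn, k) * p_active**k * (1 - p_active)**(n_syn - k)
               for k in range(k_needed, n_syn + 1))

# With 5% independent activity the branch essentially never spikes;
# the probability only becomes appreciable once activity (or, more
# realistically, input synchrony) pushes the expected count near 20.
for p in (0.05, 0.10, 0.15):
    print(p, p_dspike(p_active=p))
```

The point of the sketch: under weak independent background activity, d-spikes would be vanishingly rare, so in vivo they would seem to require correlated (synchronous) input of exactly the kind the place-cell scenario posits.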
The next potential problem is the size of the effect. In part G of the first figure in this post (above), we see that the non-linearity is rather weak. Of course, these measurements were made at the soma, so the effect may be quite marked at the dendrite itself and simply attenuated as the signal travels to the soma. This might mean a huge voltage change in the dendrite, which could induce classic synaptic plasticity by relieving the magnesium block of synaptic NMDA receptors. They don’t report any evidence of this, but it would have been nice if it had at least been explored a little, maybe through dendritic recordings or computational modelling. Regardless of what happens at the dendrite, since the amplitude of the effect at the soma is small, it is hard to see how it could robustly control axonal spike timing (remember, the above figure is just a schematic).
A third complication is how to reconcile all of this with current models of pattern recognition. According to classic neural network theory, the ‘memory’ is stored solely in the synaptic weights. Although the individual synapses continue to contribute similarly under this new scheme, their non-linear interaction is something that has not been explored much in the theoretical literature (although see work from Bartlett Mel’s lab, refs 4 & 5). One specific problem I can imagine comes from the fact that the dendritic spike doesn’t care which particular synapses initiated it (say any 20 of 200). Even if my dendrite had learned to spike in response to one particular set of synchronous inputs, the strengthened branch would also interfere with any other ‘patterns’ stored in the remaining synaptic weights on the same branch. It’s not clear how to handle this theoretically.
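Here is my toy version of that interference worry (my own construction, not the paper’s model), treating a strong branch as a pure coincidence detector:

```python
# Sketch of the interference problem: a 'strong' branch fires whenever
# ~20 of its synapses are co-active, regardless of WHICH 20 they are.

N_SYNAPSES = 200
K_SPIKE = 20  # synapses needed for a d-spike (from the paper)

def branch_spikes(active_synapses, strong=True):
    """The d-spike depends only on the COUNT of co-active synapses on a
    strong branch, not on their identity."""
    return strong and len(active_synapses) >= K_SPIKE

stored_pattern = set(range(20))           # the inputs the branch 'learned'
impostor_pattern = set(range(100, 120))   # a disjoint set of 20 synapses

print(branch_spikes(stored_pattern))    # True: the learned pattern spikes it
print(branch_spikes(impostor_pattern))  # True: so does an unrelated pattern
```

Because branch strength is a single scalar shared by all ~200 synapses, potentiating the branch for one pattern makes it equally responsive to any sufficiently synchronous pattern on that branch; selectivity would have to come from the synaptic weights themselves.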
Overall, I think it’s great to see these kinds of ideas put forward by real experimentalists, even if they turn out not to be exactly correct. The huge flaw of most existing network models of learning and memory is that they ignore a lot of what we know about how real neurons operate. These inconveniences include the spatial extent of the dendritic tree, non-linear synaptic integration, and intrinsic plasticity. The days of the simple integrate-and-fire model of the neuron are long gone. A nice paper from Panayiota Poirazi’s lab (ref 6) summed it up when they said that “something smaller than the cell lies at the heart of neural computation”.
Attila Losonczy, Judit K. Makara & Jeffrey C. Magee (2008). Compartmentalized dendritic plasticity and input feature storage in neurons. Nature, 452(7186), 436-441. DOI: 10.1038/nature06725
1. Robert Cudmore & Niraj S. Desai. Intrinsic Plasticity. Scholarpedia (2008) 3(2):1363.
2. Andreas Frick, Jeffrey Magee & Daniel Johnston. LTP is accompanied by an enhanced local excitability of pyramidal neuron dendrites. Nat Neurosci (2004) 7(2), 126-35.
3. Attila Losonczy & Jeffrey C. Magee. Integrative properties of radial oblique dendrites in hippocampal CA1 pyramidal neurons. Neuron (2006) 50(2), 291-307.
4. Panayiota Poirazi, Terrence Brannon & Bartlett W. Mel. Pyramidal neuron as two-layer neural network. Neuron (2003) 37(6), 989-99.
5. Alon Polsky, Bartlett W. Mel & Jackie Schiller. Computational subunits in thin dendrites of pyramidal cells. Nat Neurosci (2004) 7(6), 621-7.
6. Kyriaki Sidiropoulou, Eleftheria Kyriaki Pissadaki & Panayiota Poirazi. Inside the brain of a neuron. EMBO Rep (2006) 7(9), 886-92.