The human brain is a network of some 100 billion neurons. The generation, recognition and modulation of neuronal activity patterns by neural networks in the brain are physical manifestations of thoughts, feelings, actions and sensations. Efforts to understand, reproduce and communicate the activity and capacities of brain networks are hampered by the inaccessibility and extraordinary complexity of these networks. However, a general understanding of brains as dynamical systems has emerged.
Consider a brain consisting of N neurons, each with an activity state V indicating activity (1) or inactivity (0). At time t the state X of this brain is given by the activity states Vi(t) of all its neurons, where i = 1,2,...N labels the N neurons of the brain. Formally, we write

X(t) = (V1(t), V2(t), ..., VN(t))

which simply means that at a given time (t) the state of the brain (X) is the list of the activities or inactivities (V) of its N neurons (i). The neurons are said to be the state variables of the system. Since each neuron can take one of two values, a brain consisting of 100 neurons has a total of 2^100 possible states, e.g.
Possible brain states. Red indicates a change in V from the previous time point.
The 2^100 possible states of this brain are referred to as its state space. Given biologically plausible rules for neuronal activation, however, only a fraction of those 2^100 states are practically possible. For example, it is not practically possible for all the neurons in your brain to become simultaneously active, or for all neurons in one hemisphere to be active and all neurons in the other hemisphere inactive. Furthermore, only a fraction of all practically possible brain states will be expressed during the life of a brain. For example, although it is practically possible for your brain to learn and express brain states associated with the articulation of words in Swahili, in fact you will probably never express those states. Conversely, some classes of brain states may recur frequently (see below).
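Since each neuron is binary, a network of N neurons has 2^N possible states. A minimal Python sketch (a toy 4-neuron "brain", purely illustrative) makes the counting concrete:

```python
import itertools

N = 4  # a toy "brain" of 4 neurons (illustrative; real brains have ~100 billion)

# Each neuron is either active (1) or inactive (0), so a brain state is a
# binary tuple and the state space contains 2**N distinct states.
state_space = list(itertools.product([0, 1], repeat=N))

print(len(state_space))   # 16 states for N = 4
print(2 ** 100)           # state-space size for a 100-neuron brain
```

For N = 100 the count is already astronomical (about 1.3 × 10^30), which helps explain why only a tiny fraction of states is ever visited.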
The order in which a brain expresses its various states is referred to as its trajectory through state space. In some brain networks the trajectory is rhythmic and continuous. For example, neurons in the Pre-Bötzinger complex drive breathing from the moment of birth to the moment of death, and can only be temporarily displaced from their oscillatory trajectory in state space. Networks driving episodic rhythmic behaviours such as chewing go into a particular oscillatory trajectory when the behaviour is expressed, but may also be quiescent for long periods of time, or express different trajectories that drive other behaviours involving the same muscle groups (e.g. speaking, licking, coughing).
Rhythmic trajectories (also referred to as neuronal oscillations or "limit cycles") through a 3D state space. After displacement (blue) the system (a) returns to its main trajectory or (b) switches to a different trajectory. Figure from Briggman & Kristan (2008) Multi-Functional Pattern Generating Circuits.
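The way a displaced trajectory returns to its limit cycle can be illustrated with a toy dynamical system. This is a generic mathematical sketch, not a neuron model: the radial dynamics dr/dt = r(1 − r²), which attract trajectories to the unit circle, are an assumption chosen purely for simplicity.

```python
import math

# Toy limit cycle: trajectories are attracted to the unit circle
# (dr/dt = r(1 - r^2)) while the phase advances at a constant rate.
def step(x, y, dt=0.01):
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    r += r * (1 - r * r) * dt      # radius pulled back toward r = 1
    theta += 2 * math.pi * dt      # steady rotation around the cycle
    return r * math.cos(theta), r * math.sin(theta)

# Displace the system far from the cycle, then let it run.
x, y = 3.0, 0.0                    # perturbed state (radius 3)
for _ in range(2000):
    x, y = step(x, y)

print(math.hypot(x, y))            # ≈ 1.0: back on the rhythmic trajectory
```

This mirrors the blue displacement in the figure above: the perturbation decays and the system rejoins its oscillatory trajectory.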
A region or path in state space that attracts nearby network trajectories is called an attractor, and the set of states it attracts is its attractor basin. For example, consider making coffee in the morning: this behaviour is a precise trajectory through the state space of your brain, involving movement to the kitchen, location of appropriate equipment, pouring hot water into a cup etc. If you're like me, your brain will revolve in the basin of this attractor until coffee is produced, regardless of where you wake up, what time it is, what you dreamt etc. In other words, your brain tends to travel through the coffee-making attractor regardless of its starting point in state space in the morning. This is not to say that every behaviour necessarily corresponds to an attractor - by relying on environmental cues a neural network with just one attractor can express several problem-solving states (Buckley et al., 2008) - but it is a very useful simplification for thinking about brain activity.
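One well-known way such attractors can arise in a neural network is Hebbian learning in a Hopfield-type network. The sketch below is an illustrative assumption, not the specific networks discussed above: it stores a single binary pattern with Hebbian weights and shows that a nearby corrupted state is pulled back into the pattern's basin.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny Hopfield-style network: one stored pattern acts as a point attractor.
N = 50
pattern = rng.choice([-1, 1], size=N)     # the stored state (±1 activity)
W = np.outer(pattern, pattern) / N        # Hebbian weights
np.fill_diagonal(W, 0)                    # no self-connections

def settle(state, steps=20):
    """Let the network dynamics run until the state settles."""
    for _ in range(steps):
        state = np.sign(W @ state)        # synchronous threshold update
        state[state == 0] = 1
    return state

# Start from a corrupted version of the pattern (a nearby point in state space)
noisy = pattern.copy()
flip = rng.choice(N, size=10, replace=False)
noisy[flip] *= -1

# ...and the dynamics pull it back into the attractor basin.
recovered = settle(noisy)
print(np.array_equal(recovered, pattern))  # True: the attractor recaptured it
```

The basin here is large: any state agreeing with the pattern on more than half its neurons converges back to it.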
Attractor basins (green) and hills (red) in a 2D state space.
Note that while the figures show 2D and 3D spaces, the brain state spaces discussed here have N dimensions, one per neuron.
Attractors are also useful for understanding sensory classification, memories and habits of thought. For example, sensory categories are thought to result from neural networks associated with aspects of a class of objects (e.g. sensory networks responding to the sight, bark or smell of dogs) being repeatedly activated together and thus linked through Hebbian plasticity. In other words, the networks form an attractor corresponding to the abstract concept 'dog', such that all nearby brain states (e.g. neurons in the auditory cortex responding to a bark, or neurons in the visual cortex responding to a furry tail) will tend to converge on the same attractor basin and be classified as instances of 'dog'.
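The classification idea can be sketched the same way: two hypothetical concept patterns stored with Hebbian weights, and a partial sensory cue (standing in for "bark only") that settles into the nearer attractor. All names and sizes here are illustrative assumptions, not a model of real cortical circuits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two concept patterns ("dog", "cat") stored in one Hopfield-style network.
N = 100
dog = rng.choice([-1, 1], size=N)
cat = rng.choice([-1, 1], size=N)

W = (np.outer(dog, dog) + np.outer(cat, cat)) / N   # Hebbian co-activation
np.fill_diagonal(W, 0)

def settle(state, steps=20):
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Cue: only the first 30 "neurons" (say, an auditory response to a bark)
# match the dog pattern; the rest of the network starts in a random state.
cue = rng.choice([-1, 1], size=N)
cue[:30] = dog[:30]

result = settle(cue)
overlap_dog = np.mean(result == dog)
overlap_cat = np.mean(result == cat)
print(overlap_dog, overlap_cat)   # the cue settles into the 'dog' attractor
```

The partial cue converges on the full 'dog' pattern, which is the attractor-basin picture of classification: nearby states fall into the same basin.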
That's it for now. Topics for future blog posts: How are attractors continually formed and dissolved in the state space of brains and networks? How can we study neural network dynamics and attractors? How can we visualize them, quantify them and use them in computing and in our everyday understanding of ourselves and others?