Fig. 1: Leonardo da Vinci's Vitruvian Man. (Source: Wikimedia Commons)
Despite the advances in semiconductor fabrication that have allowed chips with billions of transistors, technology has not successfully replicated the computational power of the mammalian brain. (See Fig. 1) Our brains significantly outperform even modern supercomputers in both speed and accuracy on tasks of vision, audition, learning, and pattern recognition, and they thrive in real time even when given poorly conditioned inputs. [1] While modern computers employ transistors switching on the order of 10⁹ times per second, the brain's computational units (mainly neurons) typically signal only about ten times per second. Biological processing systems outperform those of computers despite being built from slow, stochastic, and inhomogeneous units. [2,3] In addition, the brain performs this impressive computation with remarkable efficiency: compared with a room-sized supercomputer, the human brain is more efficient in space, weight, and power consumption by roughly three, four, and six orders of magnitude, respectively. [2]
Information processing and manipulation in the nervous system is not fully understood. However, further efforts to understand the fundamental differences between computation in the brain and in the computer could not only improve our understanding of our own biology but could also inspire a new wave of efficient, high-performing technologies.
With regard to processing and communicating information, neurons are the predominant cells of the nervous system. Like the computational units of computers (i.e., logic gates), neurons function as parts of larger circuits and networks. A neuron can be thought of as having an input and an output region; neurons both signal and listen to each other and to the peripheral systems of the body. Communication to and from a neuron is performed by the release of chemical compounds called neurotransmitters, which passively migrate and bind to protein receptors on the input regions of the cell.
The input region, which includes the cell body and branch-like structures called dendrites, combines the chemical information from many connecting neurons and generates a voltage across the cell membrane. Membrane voltages in the soma and dendrites propagate passively down the membrane and ultimately to a region of the cell called the axon hillock. The axon hillock weighs the polarity and magnitude of the voltage against a threshold value. If the threshold is met, an actively propagated voltage (called a spike or action potential) travels down the wire-like component of the cell called the axon. The axon leads to the output region of the cell, where the voltage is converted back into a chemical signal as neurotransmitters are released. The space between the input and output regions of neurons (and potentially other cells) where the chemical signals are released and have their effects is called the synapse. A synapse can be shared by a large number of neurons. A single action potential and the subsequent neurotransmitter release are likely insufficient to generate an action potential in the next synapse-sharing cell. Instead, neurotransmitter release can be summed temporally (i.e., the same neuron firing repeatedly) or spatially (i.e., multiple synapse-sharing neurons firing at roughly the same time). This provides some insight into the stochasticity and parallelism of networks in the brain.
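To make the temporal and spatial summation concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. The model, parameter values, and function names are illustrative choices, not something specified in the cited sources:

```python
import numpy as np

def leaky_integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron (illustrative only).

    inputs: array of shape (n_synapses, n_steps); each entry is the input
    delivered by one presynaptic neuron at one time step.
    Returns the time steps at which the summed membrane voltage crossed
    threshold and a spike was emitted.
    """
    v = 0.0                                 # membrane voltage relative to rest
    spike_times = []
    for t in range(inputs.shape[1]):
        v = leak * v + inputs[:, t].sum()   # spatial summation plus passive decay
        if v >= threshold:                  # axon-hillock-like threshold test
            spike_times.append(t)
            v = 0.0                         # reset after the action potential
    return spike_times

# A single small input never reaches threshold; spikes occur only when inputs
# add up temporally (repeated firing) or spatially (several synapses at once).
rng = np.random.default_rng(0)
inputs = (rng.random((5, 100)) < 0.2) * 0.3   # 5 synapses, sparse inputs of size 0.3
print(leaky_integrate_and_fire(inputs))
```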
The communication is digital: once the threshold voltage is met, an action potential is generated, and information is relayed by whether or not the spike happens (two states). Computation, which is analog, is performed at the cellular level as the voltages from chemical inputs are summed and weighed across different parts of the input region.
Throughout the cell, voltages are set by intracellular and extracellular concentrations of charged ions. Protein ion pumps and channels modulate these concentration gradients: neurotransmitter binding or a sufficient voltage in the nearby membrane (characteristic of the channels along the axon) opens channels that let ions flow across the membrane, while the pumps pull in or extrude ions to restore the gradients. The ion pumps are vital to the digital voltage spike and demand a substantial portion of the cell's energy. To reduce noise, they work hard to maintain the concentration gradients around the axon so that it is clear both when the cell is firing and when it is not. Although chemicals are cleared from the synapse to minimize unintentional excitation, noise in the input region is more prevalent; there is plenty of room for interference, as densely packed neurons can chemically or electrically influence each other.
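The equilibrium voltage that a single ionic species would set across the membrane is standard electrophysiology and can be written with the Nernst equation (this formula is background material, not taken from the cited sources):

```latex
E_{\mathrm{ion}} \;=\; \frac{RT}{zF}\,
\ln\!\left(\frac{[\mathrm{ion}]_{\mathrm{out}}}{[\mathrm{ion}]_{\mathrm{in}}}\right)
```

Here R is the gas constant, T the absolute temperature, z the ion's valence, and F Faraday's constant. With typical textbook values for potassium (roughly 5 mM outside and 140 mM inside at body temperature), this works out to about -90 mV, which is why maintaining the gradients is so closely tied to the resting and spiking voltages described above.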
The networks of neurons are plastic. Although new neurons do not form in most regions of the brain, threshold voltages can be altered and new connections (synapses) can be created, strengthened, or weakened. This plasticity allows networks to be influenced by their environment and is the basis for the emergent animal behaviors of learning and adaptation. [4]
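As a rough sketch of how such plasticity is often modeled in software, a Hebbian-style weight update strengthens synapses between co-active neurons and lets unused ones fade. The rule, learning rates, and array shapes below are illustrative assumptions rather than a description of any specific biological mechanism:

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01, decay=0.001):
    """Toy Hebbian rule: synapses between co-active neurons strengthen,
    while all connections slowly decay (a stand-in for weakening/pruning).

    weights: (n_post, n_pre) array of synaptic strengths
    pre, post: activity (e.g. firing rates) of pre- and postsynaptic neurons
    """
    weights = weights + lr * np.outer(post, pre)   # co-activity strengthens
    weights = weights - decay * weights            # unused synapses weaken
    return np.clip(weights, 0.0, 1.0)              # keep strengths bounded

w = np.full((3, 4), 0.5)                 # 3 postsynaptic x 4 presynaptic neurons
pre = np.array([1.0, 0.0, 1.0, 0.0])     # which inputs were active this window
post = np.array([1.0, 0.0, 0.0])         # which outputs were active this window
w = hebbian_update(w, pre, post)
print(w)
```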
Table 1: Some of the major differences of computation as performed in the mammalian brain and the modern computer. (Source: G. Vega, after the SIA. [5])
Unlike in the brain, computation in the modern computer is performed predominantly in the digital realm. The units of computation operate on Boolean logic and are composed of transistors operating as binary switches. As in the nervous system, these units communicate (relay the results of computation) digitally.
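As a toy illustration of how such computation reduces to switch-like primitives, the snippet below (purely illustrative, in Python rather than hardware) builds OR out of nothing but NAND gates:

```python
def nand(a: bool, b: bool) -> bool:
    """A single switch-level primitive; all other Boolean logic can be
    composed from it."""
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

# Truth table for OR built entirely from NAND gates
for a in (False, True):
    for b in (False, True):
        print(a, b, or_(a, b))
```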
The regions of the brain are highly specialized: circuits and networks are customized for certain types of inputs and functions (e.g., motor control). Neuronal projections (axons) are sent throughout the brain, but these long-range connections are mostly communicative. Computers, by contrast, rely on general-purpose hardware that is then specialized at the software level, whereas the brain develops and customizes itself for much more localized computation down to the hardware-equivalent level (wetware).
Although logic gates are orders of magnitude smaller and many orders of magnitude faster than neurons, the speed and density of information processing in the brain remain high even compared with computers. At the hardware/wetware level, this is achieved through massive parallelism and high synaptic connectivity. The network links activated at the synapses are comparable to digital instructions, and synaptic activity reaches roughly 10¹⁶ connections per second. It would take hundreds of thousands of desktop computers and many megawatts of power to match such rates. [2]
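That figure is consistent with a rough back-of-envelope estimate using commonly quoted textbook numbers (on the order of 10¹¹ neurons, roughly 10⁴ synapses per neuron, firing around ten times per second; these values are assumptions, not taken from the cited sources):

```latex
\underbrace{10^{11}}_{\text{neurons}} \times
\underbrace{10^{4}}_{\text{synapses/neuron}} \times
\underbrace{10\ \mathrm{s}^{-1}}_{\text{typical firing rate}}
\;\approx\; 10^{16}\ \text{synaptic events per second}
```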
The nervous system operates spontaneously and does not rely on coded instructions. A stimulus initiates a set of excitations that then propagate through networks. The algorithm-equivalent in the brain is provided by the plastic interactions and connections at the synapse, which modulate the response to a stimulus (and these "algorithms" are not deterministic).
There are areas of computer science and mathematics (e.g., artificial intelligence and neural networks) that focus on emulating the brain's computation from a software perspective. In the brain, however, learning and adaptation are accomplished even at the hardware-equivalent level via synaptic plasticity. This plasticity of networks underlies the brain's ability to learn, adapt, and even customize and self-organize. It also ties into the observation that the brain does not separate computation and memory as computers do, allowing experience to influence processing on many levels. These comparisons (and more) are summarized in Table 1 and in the SIA "Call to Action." [5]
The brain evolved under clear constraints of energy and space, since the living organism alongside which the nervous system developed had to support it without sacrificing reproductive success. This efficiency has been attributed to the nervous system's systematic use of analog computation, its local wiring and computational strategies, and its abilities to adapt and learn. [6]
With regard to optimizing energy efficiency, whether analog or digital is preferable is case-dependent. For low-precision processing, analog has the edge in computational energy efficiency (operations per joule), while at high precision, drawbacks such as susceptibility to noise prevent it from being the better option. [6] In part because of the power of learning and adaptability, the brain excels in such low-precision settings, relying on the relative values of analog signals. Adaptive techniques and learning are then used to correct for noise and the many other process imperfections that are magnified by the reliance on analog computation. [1]
Analog computation represents exploitation of the "physics of the medium" in which the brain is built. [6] Analog is advantageous where there is a reasonable mapping between the primitives of the system (e.g., how peripheral systems physically respond to stimuli) and the operations needed in the computation. As one of many examples, sounds cause vibrations in the inner ear that are then processed directly, without the need for intermediate digital conversion. Bypassing analog-to-digital conversion allows biological systems to improve energy efficiency. For more detail on the implications of analog and digital computation and communication applied to neurobiology and silicon technology, see Sarpeshkar. [6]
The brain's remarkable operational efficiency has also been attributed to its organization. One key feature that allows for this efficiency is the brain's ability to be "customized for the task at hand" down to the hardware-equivalent level. [2] The example of audition also illustrates how the nervous system localizes and customizes computation: in response to vibrations in the inner ear, most of the processing is done on-site and then sent to specific brain regions for higher-level interpretation. This flow passes information to successive structures that can themselves be specialized and optimized for a specific class of tasks. Interestingly, the customization of the brain is not fully determined by genetics. (See Boahen for a description of neural development, specialization, and organization, and of the silicon technologies trying to emulate them. [2])
The field of neuromorphic engineering looks to create computational devices and chips inspired by the brain and nervous system. Although transistors are used as switches in digital circuits, they have other regions of operation that reveal their analog nature, and their current-voltage characteristics are similar to those of the voltage-dependent protein ion channels in neurons. Utilizing the analog capabilities of semiconductor technology allows the elementary operations of computation to be defined in terms of device physics, [1] taking advantage of the medium in which the circuits are built in much the same way the nervous system does. Silicon is then a medium in which the organization and design of the brain can be imitated. Neuromorphic chips such as silicon retinas and cochleae have been produced in academic settings and have demonstrated considerable power savings compared with their digital equivalents.
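The analogy is often illustrated by the exponential current-voltage relation of a MOS transistor operated below threshold. A commonly quoted form (stated here as background, for a device in saturation with its source grounded, not derived in the cited sources) is:

```latex
I_{D} \;\approx\; I_{0}\,
\exp\!\left(\frac{\kappa\,V_{G}}{U_{T}}\right),
\qquad U_{T} = \frac{kT}{q} \approx 26\ \mathrm{mV}\ \text{at room temperature}
```

where I₀ is a process-dependent prefactor, κ the gate-coupling coefficient, V_G the gate voltage, and U_T the thermal voltage. This exponential dependence on voltage mirrors the Boltzmann-like behavior of ion-channel conductances, which is what lets subthreshold analog circuits emulate neural dynamics directly in device physics.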
Neuromorphic circuits have been designed and incorporated into both digital and analog very-large-scale integrated (VLSI) technologies, [7] creating the potential for modern nanoscale fabrication advances to work in conjunction with brain-inspired design. There are many complications to using transistors in analog circuits, such as inhomogeneity, lack of precision, and susceptibility to noise. The brain largely controls the analogous problems through its design and through adaptive techniques that have been explored but not yet fully implemented in silicon. The field looks to incorporate the concepts of specialization and customization, synaptic plasticity, adaptability, and learning down to the level of hardware. [2]
This is not to say that modern computers and the techniques on which they were built do not have value. The great chess champion Garry Kasparov was defeated by IBM's Deep Blue supercomputer in 1997; the computer analyzed 200 million potential board positions per second while Kasparov considered three. [2] The computer used raw quantity to defeat quality, but quality can be born out of quantity: modern computers can track and analyze incredible amounts of information in a time frame with which humans cannot compete.
Modern computers and the mammalian brain succeed at such contrasting tasks in part because each is tailored to different functions and inputs. While modern computers thrive on precision, the human brain is more attuned to interpreting information in settings where there is no deterministic right or wrong answer. Even in a world where neural circuits are successfully realized in silicon, digital computation will likely still have immense value. Regardless, understanding the computational processes of the brain is an instrumental step in developing new paradigms for computing that improve not only computational capability but also efficiency.
© Gabriel Vega. The author grants permission to copy, distribute and display this work in unaltered form, with attribution to the author, for noncommercial purposes only. All other rights, including commercial rights, are reserved to the author.
[1] C. Mead, "Neuromorphic Electronic Systems," Proc. IEEE 78, 1629 (1990).
[2] K. Boahen, "Neuromorphic Microchips," Scientific American 292, No. 5, 56 (May, 2005).
[3] G. Indiveri and T. K. Horiuchi, "Frontiers in Neuromorphic Engineering," Front. Neurosci. 5, 118 (2011).
[4] M. R. Ahmed and B. K. Sujatha, "A Review on Methods, Issues and Challenges in Neuromorphic Engineering," IEEE 7322626, 2 Apr 2015.
[5] "Rebooting the IT Revolution: A Call to Action", Semiconductor Industry Association, September 2015.
[6] R. Sarpeshkar, "Analog Versus Digital: Extrapolating From Electronics to Neurobiology," Neural Comput. 10, 1601 (1998).
[7] G. Indiveri, et al., "Neuromorphic Silicon Neuron Circuits," Front. Neurosci. 5, 73 (2011).
[8] C. S. Poon and K. Zhou, "Neuromorphic Silicon Neurons and Large-Scale Neural Networks: Challenges and Opportunities," Front. Neurosci. 5, 108 (2011).