Feature
Learning from the brain to make AI more energy-efficient
04 September 2023
Energy consumption is one of the main problems facing modern computing. The Human Brain Project has tackled the efficiency issue – potentially changing how computers will be thought of and designed in the future.
As much as computing has progressed, a biological brain still vastly outperforms the fastest machines in many ways, at a fraction of the energy consumption. While the demand for computing power is steadily increasing, classical computers can only become so much more energy-efficient, due to the inherent principles of their design.
In contrast to power-hungry computers, brains have evolved to be energy-efficient. A human brain is estimated to run on roughly 20 watts – about what a typical computer monitor draws. On this shoestring budget, 80–100 billion neurons perform trillions of operations that would require the power of a small hydroelectric plant if performed artificially.
Progress in neuromorphic technologies
Neuromorphic technologies transfer insights about the brain to optimise AI, deep learning, robotics and automation. Computing systems using this approach have become increasingly refined and are in development worldwide. Like the brain itself, neuromorphic computers hold the promise of processing information with high energy efficiency, fault tolerance and flexible learning ability.
In the Human Brain Project, teams of engineers and theoretical neuroscientists focus on the design and development of neuromorphic devices, which perform calculations with networks of spiking artificial neurons and more generally take inspiration from the way the human brain functions. They have built Europe’s most powerful neuromorphic systems, BrainScaleS and SpiNNaker, which are both part of the HBP’s open research infrastructure EBRAINS.
The first system, BrainScaleS, is experimental hardware that emulates the behaviour of neurons with analog electrical circuits, omitting energy-hungry digital calculations. It relies on individual events, called “spikes”, instead of the stream of continuous values used in most computer simulations. Sparse exchange of such electrical impulses between neurons is the basis of efficient signalling in the brain. Mimicking the way neurons compute and transmit information allows the BrainScaleS chips, now in their second iteration, to perform very fast calculations while reducing data redundancy and energy consumption. The large-scale BrainScaleS system is based at Heidelberg University.
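To make the spike-based principle concrete, here is a minimal digital sketch of the leaky integrate-and-fire dynamics that analog systems such as BrainScaleS realise directly in circuitry. All parameter values are illustrative assumptions, not hardware settings.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, simulated digitally.
# BrainScaleS emulates equivalent dynamics in analog circuits; every
# constant below is an illustrative assumption, not a hardware value.
dt = 0.1          # time step (ms)
tau = 10.0        # membrane time constant (ms)
v_rest = 0.0      # resting potential (arbitrary units)
v_thresh = 1.0    # spike threshold
v_reset = 0.0     # potential after a spike

v = v_rest
spikes = []
rng = np.random.default_rng(0)
for step in range(1000):
    i_in = rng.poisson(0.02)  # sparse, event-like input
    # Leaky integration: the membrane decays toward rest between inputs.
    v += dt / tau * (v_rest - v) + i_in
    if v >= v_thresh:         # threshold crossing emits a discrete spike
        spikes.append(step * dt)
        v = v_reset
print(f"{len(spikes)} spikes in 100 ms of simulated time")
```

Between the rare input events the neuron does nothing but decay, which is exactly why spike-based signalling is so cheap: computation and communication happen only when an event occurs.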
The second system, SpiNNaker, is a massively parallel digital computer designed to support large-scale models of brain regions in biological real time. The SpiNNaker neuromorphic computer is based at the University of Manchester. It runs spiking neural network algorithms on its 1,000,000 processing cores, mimicking the way the brain encodes information, and can be accessed as a testing station for new brain-derived AI algorithms (Furber & Bogdan 2020). At the same time, SpiNNaker has shown promise for developing small low-energy chips for robots and edge devices. In 2018, the German state of Saxony pledged 8 million euros of support for the next generation of SpiNNaker, SpiNNaker2, which has been developed in a collaboration between the University of Manchester and TU Dresden within the HBP. SpiNNaker2 chips have since gone into large-scale production with chip manufacturer GlobalFoundries.
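Both systems are scripted through the simulator-independent PyNN interface. The following sketch shows the general shape of such a script; the backend import and all parameters are assumptions that vary with the installed toolchain, and the network itself is purely illustrative.

```python
# A minimal PyNN script of the kind used to drive SpiNNaker.
# Backend module name and parameters are assumptions; they depend on
# the installed toolchain and EBRAINS access route.
import pyNN.spiNNaker as sim  # backend import may differ per installation

sim.setup(timestep=1.0)  # ms; SpiNNaker runs in biological real time

# A Poisson spike source drives a small population of LIF neurons.
stimulus = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))
neurons = sim.Population(100, sim.IF_curr_exp())
sim.Projection(stimulus, neurons, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

neurons.record("spikes")
sim.run(1000.0)                        # simulate one second
data = neurons.get_data().segments[0]  # Neo segment with spike trains
sim.end()
```

Because PyNN abstracts over the hardware, essentially the same description can be executed on SpiNNaker, BrainScaleS or a software simulator, which is what makes the systems usable as shared testing stations.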
A SpiNNaker2 computer system with 70,000 chips and 10 million processing cores will be based at TU Dresden. SpiNNaker2 has been chosen as one of the pilot projects of Germany’s Federal Agency for Disruptive Innovation, SPRIN-D. A first company for commercialisation, SpiNNcloud Systems, has been founded by the Dresden team.
With the hardware advancing, software is learning from the brain as well. Theoretical neuroscientists in the HBP have become highly proficient at developing algorithms that resemble brain mechanisms far more closely than current AI does.
Brain research and AI have always been connected. The earliest artificial neural networks of the 1950s were already based on rudimentary knowledge about our nerve cells. Today, such AI systems have become ubiquitous, but they still run into limitations: their training is extremely energy-hungry, and what they learn can break down in unexpected ways.
Using new insights into biological brain networks, software modellers in the HBP have developed a next generation of brain-derived algorithms. These algorithms, with their higher biological realism, have recently proven in practice to cut energy demand drastically, especially when run on neuromorphic systems.
After a series of breakthroughs by several HBP teams (Cramer et al. 2022, Göltz et al. 2021, Bellec et al. 2020), a collaboration of HBP researchers at TU Graz and Intel tested in 2022 how far such algorithms reduce energy demand on Intel’s Loihi chip. The result was an up to 16-fold decrease in energy consumption compared to non-neuromorphic hardware (Rao et al. 2022).
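One key ingredient behind several of these results is the surrogate-gradient trick (cf. Cramer et al. 2022): the spike nonlinearity is a hard threshold whose true derivative is zero almost everywhere, so training substitutes a smooth stand-in during the backward pass. Below is a toy sketch of the idea; shapes, constants and the squared-error task are all illustrative assumptions.

```python
import numpy as np

def spike(v):
    """Forward pass: non-differentiable threshold, emits 0/1 spikes."""
    return (v >= 1.0).astype(float)

def surrogate_grad(v, beta=10.0):
    """Backward pass: smooth surrogate derivative centred on threshold.
    The fast-sigmoid shape and beta are illustrative choices."""
    return 1.0 / (1.0 + beta * np.abs(v - 1.0)) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 16))          # batch of inputs (toy data)
w = rng.normal(scale=0.5, size=(16,))  # weights of one spiking unit

v = x @ w                 # membrane potentials
s = spike(v)              # emitted spikes (forward pass)
target = np.ones_like(s)  # toy target: the unit should fire

# Gradient of a squared error, with the surrogate standing in for the
# (zero almost everywhere) derivative of the spike function.
grad_w = x.T @ ((s - target) * surrogate_grad(v)) / len(x)
w -= 0.1 * grad_w         # one gradient step
```

The network still computes with cheap, discrete spikes at run time; only the training procedure pretends the threshold is smooth.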
A positive feedback loop
Importantly for the HBP and neuroscience in general, more powerful and efficient computing also accelerates brain research, creating a positive feedback loop between neuro-inspired computers and detailed brain simulations. Mechanisms that have evolved in biological brains to make them adaptable and capable of learning can be mimicked on a neuromorphic computer, where they can be studied and better understood. This is what a team of HBP researchers at the University of Bern has achieved with so-called “evolutionary algorithms” (Jordan et al. 2021). The programmes they developed search for solutions to a given problem by mimicking biological evolution through natural selection, promoting the candidates most able to adapt. Whereas traditional programming is a top-down affair, solutions found by evolutionary algorithms emerge from the process on their own, as in the sketch below. This could provide further insights into biological learning principles, improve research into synaptic plasticity and accelerate progress towards powerful artificial learning machines.
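The loop of variation and selection is simple to state in code. The toy task below (matching a target vector) is purely illustrative; Jordan et al. (2021) evolve symbolic synaptic plasticity rules, not numeric vectors, but the mutate-and-select skeleton is the same.

```python
import numpy as np

# Toy evolutionary search: candidates are mutated and the fittest
# survive. Task, population size and mutation scale are assumptions.
rng = np.random.default_rng(0)
target = rng.normal(size=8)

def fitness(candidate):
    return -np.sum((candidate - target) ** 2)  # higher is better

population = [rng.normal(size=8) for _ in range(20)]
for generation in range(200):
    # Selection: keep the fittest quarter of the population.
    population.sort(key=fitness, reverse=True)
    parents = population[:5]
    # Variation: offspring are mutated copies of the survivors.
    population = [p + rng.normal(scale=0.1, size=8)
                  for p in parents for _ in range(4)]

best = max(population, key=fitness)
print(f"best fitness after 200 generations: {fitness(best):.4f}")
```

No candidate is ever designed top-down; good solutions accumulate because selection repeatedly favours whatever variation happens to work.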
In the last few years, impressive neuromorphic breakthroughs have turned advantages of the technology that were previously only theorised into tangible results. As the limitations of traditional AI and classical computers become ever more obvious, learning from the brain has emerged as one of the most powerful approaches for moving ahead.
This text was first published in the booklet ‘Human Brain Project – A closer look at scientific advances’, which includes feature articles, interviews with leading researchers and spotlights on the latest research and innovation.
References
Bellec G, Scherr F, Subramoney A, Hajek E, Salaj D, Legenstein R, Maass W (2020). A solution to the learning dilemma for recurrent networks of spiking neurons. Nat. Commun. 11(1):3625. doi: 10.1038/s41467-020-17236-y
Cramer B, Billaudelle S, Kanya S, Leibfried A, Grübl A, Karasenko V, Pehle C, Schreiber K, Stradmann Y, Weis J, Schemmel J, Zenke F (2022). Surrogate gradients for analog neuromorphic computing. Proc. Natl. Acad. Sci. U. S. A. 119(4):e2109194119. doi: 10.1073/pnas.2109194119
Furber S, Bogdan P (eds.) (2020). SpiNNaker: A Spiking Neural Network Architecture. Boston-Delft: now publishers. doi: 10.1561/9781680836523
Göltz J, Kriener L, Baumbach A, Billaudelle S, Breitwieser O, Cramer B, Dold D, Kungl AF, Senn W, Schemmel J, Meier K, Petrovici MA (2021). Fast and energy-efficient neuromorphic deep learning with first-spike times. Nat. Mach. Intell. 3:823-835. doi: 10.1038/s42256-021-00388-x
Jordan J, Schmidt M, Senn W, Petrovici MA (2021). Evolving interpretable plasticity for spiking networks. eLife 10:e66273. doi: 10.7554/eLife.66273
Rao A, Plank P, Wild A, Maass W (2022). A long short-term memory for AI applications in spike-based neuromorphic hardware. Nat. Mach. Intell. 4:467–479. doi: 10.1038/s42256-022-00480-w