Feature

Understanding how the brain makes sense of what we see

17 August 2023


In the Human Brain Project (HBP), researchers are using state-of-the-art measurement, analysis and modelling tools to advance our knowledge of the neural mechanisms underlying our senses, especially vision, which provides a large share of the information we receive from our surroundings.

From the eyes to the brain

Aristotle regarded our senses as a portal to reality. The Greek philosopher believed that what we see and hear discloses a large part of reality, but that humans still require their intellect to understand their environment. The 19th-century German scientist Hermann von Helmholtz argued that perception requires making predictions about the world based on previously acquired knowledge. In the 21st century, scientists are still grappling with this question, trying to better understand how the input from our senses is processed in the brain, enabling us to perceive and experience the world around us.

In a recent study, HBP scientists challenged our current understanding of how vision works. The team from the University of Amsterdam showed in mice that the brain’s mechanisms for vision also depend on input from other senses (Oude Lohuis et al. 2022). They found that the time the brain needs to make a visual interpretation depends not only on the visual properties of the stimuli, but also on auditory and tactile inputs.

The activity of neurons in the visual cortex varied based on whether the animals had to report only what they saw or also what they heard or felt. In other words, brain signals indicating that visual stimuli had been detected took longer to appear if subjects were also paying attention to sounds or touch.

Previously, our understanding had been that the processing of a visual scene is mainly determined by the complexity of the scene itself. However, visual processing does not occur in isolation and is influenced by sound, touch, smell and other senses.
This multimodal view of sensory processing underlies a theory on brain mechanisms of perception and consciousness previously proposed by HBP researcher Cyriel Pennartz, one of the authors of this study.

According to this theory, conscious processing is jointly shaped by multiple senses within an overarching framework that characterises perception as the construction of best-guess representations of our surroundings. The theory has also given rise to computer models built in the HBP.

This theoretical framework, called neurorepresentationalism, holds that conscious experience is a sensorily rich, spatially encompassing representation of body and environment, even though we have the impression of experiencing external reality directly (Pennartz 2022).

Mouse model of vision

Many scientists study mechanisms of vision in the mouse model, because the organisation of the mouse cortical visual system resembles that of primates, albeit with some significant differences.

The cortical visual systems of both mice and primates are organised into a primary visual area surrounded by a number of higher visual areas. The retinas of primates, however, contain a central region of maximum visual acuity, called the fovea, which mice lack. Consequently, mice don’t have a fovea representation in their cortex.

Yet previous studies have suggested that the mouse retina is not entirely uniform. In a recent study, HBP researchers from the Netherlands Institute of Neuroscience (NIN) used wide-field imaging and modelling to show that the mouse visual cortex contains a region of enhanced spatial resolution, which they called the “focea” (van Beest et al. 2021).

In other words, mice have a cortical specialisation that enhances processing of a particular region of the visual scene, located directly in front of and slightly above the animal. In addition, the researchers found that mice, when exploring a visual scene, take advantage of this higher spatial resolution by moving their head and eyes to keep the focea at this location.

By demonstrating a previously unknown similarity between the visual areas of mice and primates, this study contributes important new insights to the field.

From biology to computing

Brain models can have a massive impact on artificial intelligence (AI). Since the brain processes images in a much more energy-efficient way than artificial networks, scientists take inspiration from neuroscience to create neural networks that use significantly less energy. HBP researchers from the Graz University of Technology recently trained a large-scale model of the primary visual cortex of the mouse to solve a number of visual tasks, with high accuracy and versatility (Chen et al. 2022).

This model provides the largest integration of anatomical detail and neurophysiological data currently available for the primary visual area V1, which is the first cortical region to receive and process visual information. It offers an unprecedented window onto the dynamics of visual processing in this brain area and shows how the brain’s visual processing capabilities can be reproduced. Scientists hope that such neural networks can serve as blueprints for visual processing in more energy-efficient neuromorphic hardware.
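The energy savings come from the way spiking neurons communicate: downstream signalling happens only at sparse, discrete spike events rather than at every step of the computation. As a rough illustration, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of simplified spiking unit such models build on; the function name and all parameter values are illustrative assumptions, not taken from the published model.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Simulate a single leaky integrate-and-fire neuron.

    The membrane potential integrates the input current and leaks back
    toward rest; a spike is emitted only when the threshold is crossed.
    Because communication happens only at these sparse spike events,
    spiking networks can run far more cheaply on neuromorphic hardware
    than densely firing artificial networks.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: dV/dt = (-(V - V_rest) + R_m * I) / tau
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:               # threshold crossing -> spike
            spike_times.append(step * dt)
            v = v_reset                 # reset after the spike
    return spike_times

# Illustrative input: a constant 2 nA current for 200 ms
current = np.full(200, 2e-9)
print(simulate_lif(current))  # a handful of sparse spike times
```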

On the path to vision implants

Designing a device capable of helping blind people see has been a long-held dream of scientists, and a recent study has shown that this goal is now within reach.

HBP researchers from the NIN are developing a brain implant that electrically stimulates the brain’s visual cortex with high precision. The team demonstrated that they could use the implant to successfully induce visual perception in monkeys (Chen et al. 2020). The device (see image) contains 1,000 microelectrodes that stimulate specific points of the visual cortex to induce “phosphenes”, small dot-like percepts in the visual field. The pixel-like dots induced by many electrodes at once can be combined to create the perception of shapes.

Electrical stimulation of two areas of the brain that are important for visual perception.

In combination with a wearable camera, such implants can translate visual information directly into brain-induced visual experiences – a way of seeing for the blind. The method completely bypasses the eyes and the optic nerve, providing stimulation directly to the brain’s visual cortex. This means that, if it someday leads to a successful clinical device, it could help people who lost their vision after damage to the eyes or to the pathways leading to the visual cortex.
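As a rough illustration of how such a pipeline could turn a camera frame into a stimulation pattern, here is a minimal sketch: the frame is downsampled to the electrode grid, and every electrode whose patch of the scene is bright enough is stimulated, so that the evoked phosphenes outline the shapes in view. The 32 × 32 grid (roughly 1,000 electrodes), the threshold and the function name are illustrative assumptions, not the actual device interface used in the studies.

```python
import numpy as np

GRID = 32  # assumed electrode layout: 32 x 32, roughly 1,000 electrodes

def frame_to_stimulation(frame, threshold=0.5):
    """Map a grayscale camera frame to a binary electrode pattern.

    Each electrode corresponds to one coarse block of pixels: the frame
    is downsampled to the electrode grid, and an electrode is stimulated
    wherever the local brightness exceeds the threshold. Each stimulated
    electrode would evoke one phosphene, so bright regions of the scene
    become clusters of dot-like percepts that trace out shapes.
    """
    h, w = frame.shape
    bh, bw = h // GRID, w // GRID
    # Average brightness within each electrode's block of pixels
    blocks = frame[:bh * GRID, :bw * GRID].reshape(GRID, bh, GRID, bw)
    coarse = blocks.mean(axis=(1, 3))
    return coarse > threshold  # True = stimulate this electrode

# Illustrative frame: a bright vertical bar on a dark background
frame = np.zeros((128, 128))
frame[:, 60:68] = 1.0
pattern = frame_to_stimulation(frame)
print(pattern.sum(), "of", GRID * GRID, "electrodes stimulated")
```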

On a smaller scale, the technological principle has already been transferred to human medicine. The NIN team participated in a joint European–American study in which a prosthesis consisting of a small video camera mounted on a pair of glasses was connected to a brain implant with 96 microelectrodes (see image). After a training period, this prosthesis enabled a 57-year-old blind woman to see simple shapes and letters in a small section of the visual field (Fernández et al. 2021). At the time of the experiments, she had been fully blind for 16 years.

A miniature camera connected to an eye tracker built into the frame of glasses transmits visual information to the brain implant.

With our understanding of visual processing in our brains constantly advancing, researchers are making strides towards next-level AI and clinical technologies. State-of-the-art experimental and modelling methods are giving scientists the tools to solve the mysteries of perception.

This text was first published in the booklet ‘Human Brain Project – A closer look at scientific advances’, which includes feature articles, interviews with leading researchers and spotlights on latest research and innovation.

References

Chen X, Wang F, Fernandez E, Roelfsema PR (2020). Shape perception via a high-channel-count neuroprosthesis in monkey visual cortex. Science 370(6521). doi: 10.1126/science.abd7435

Chen G, Scherr F, Maass W (2022). A data-based large-scale model for primary visual cortex enables brain-like robust and versatile visual processing. Sci. Adv. 8(44):eabq7592. doi: 10.1126/sciadv.abq7592

Fernández E, Alfaro A, Soto-Sánchez C, Gonzalez-Lopez P, Lozano AM, Pena S, Grima MD, Rodil A, Gómez B, Chen X, Roelfsema PR, … Normann RA (2021). Visual percepts evoked with an intracortical 96-channel microelectrode array inserted in human occipital cortex. J. Clin. Invest. 131(23):e151331. doi: 10.1172/JCI151331

Oude Lohuis MN, Pie JL, Marchesi P, Montijn JS, de Kock CPJ, Pennartz CMA, Olcese U (2022). Multisensory task demands temporally extend the causal requirement for visual cortex in perception. Nat. Commun. 13(1):2864. doi: 10.1038/s41467-022-30600-4

Pennartz CMA (2022). What is neurorepresentationalism? From neural activity and predictive processing to multi-level representations and consciousness. Behav. Brain Res. 432:113969. doi: 10.1016/j.bbr.2022.113969

van Beest EH, Mukherjee S, Kirchberger L, Schnabel UH, van der Togt C, Teeuwen RRM, Barsegyan A, Meyer AF, Poort J, Roelfsema PR, Self MW (2021). Mouse visual cortex contains a region of enhanced spatial resolution. Nat. Commun. 12(1):4029. doi: 10.1038/s41467-021-24311-5