The visual cortex is the sensory area of the brain responsible for the information processing that underlies our visual perceptions. The first cortical area to receive input from visual stimuli is called the primary visual cortex, or V1. It is the most studied area of the visual cortex, and probably the most studied sensory area of the brain overall.
Since the Nobel Prize-winning work of Hubel and Wiesel, which characterized V1 neurons as edge detectors, many experimental findings have been collected over the years, shedding light on numerous structural and functional aspects of V1 and its neurons.
Our understanding of V1 information processing is, however, currently limited in two ways:
1) the experimental findings differ in nature and give us a “fragmented” picture of how V1 works;
2) until recently, most modeling approaches aimed at explaining only a small number of experimentally observed phenomena, most of the time based on the presentation of simple synthetic stimuli. Such approaches highlight specific aspects of information encoding but do not engage neural responses in ecological settings, and are therefore unable to capture the richness of V1 information encoding.
In this talk, I’ll present the two computational approaches pursued by my research group that contribute to a better understanding of information encoding in V1 neurons:
1) the development of deep learning models to predict neural responses, together with tools to inspect them;
2) the development of large-scale spiking neural network simulations, strongly constrained by biology, that aim at achieving a cohesive understanding of V1 information encoding by integrating experimental findings.