Ph.D., 2000, MIT (Computational Neuroscience)
New Research Building, Room WP-12
The lab investigates the computational mechanisms underlying human object recognition as a gateway to understanding the neural bases of intelligent behavior. The ability to recognize objects is a fundamental cognitive task in every sensory modality, e.g., for friend/foe discrimination, social communication, reading, or hearing, and its loss or impairment is associated with a number of neural disorders (e.g., autism, dyslexia, or schizophrenia). Yet despite the apparent ease with which we see and hear, object recognition is widely acknowledged to be a very difficult computational problem. It is even more difficult from a biological systems perspective, since it involves several levels of understanding, from the computational level, through the levels of cellular and biophysical mechanisms and neuronal circuits, up to the level of behavior.
In our work, we combine computational models (in particular the HMAX model of object recognition in cortex) with human behavioral, functional magnetic resonance imaging (fMRI), and electroencephalography (EEG) data. This comprehensive approach addresses one of the major challenges in neuroscience today: the need to combine experimental data from a range of approaches in order to develop a rigorous and predictive model of human brain function that quantitatively and mechanistically links neurons to behavior. This is of interest not only for basic research, but also for investigating the neural bases of behavioral deficits in mental disorders. Understanding the neural mechanisms underlying object recognition in the brain is also of significant relevance for Artificial Intelligence, as engineered pattern recognition systems (e.g., in machine vision or speech recognition) still lag far behind their human counterparts in terms of robustness, flexibility, and the ability to learn from few exemplars. Finally, a mechanistic understanding of the neural processes endowing the brain with its superior object recognition abilities opens the door to supporting and extending human cognitive abilities in this area through hybrid brain-machine systems (“augmented cognition”).
Most of the work is focused on the domain of vision, reflecting its status as the most accessible sensory modality. However, given that similar problems of specificity and invariance have to be solved in other sensory modalities as well (for instance in audition), it is likely that similar computational principles underlie processing in those domains, and we are interested in understanding commonalities and differences in processing between modalities. Current projects include research on multitasking, vibrotactile speech, and ultra-rapid object detection through cortical processing shortcuts.
The findings not only help reveal how the brain processes words, but also provide insights into how to help people with reading disabilities, says Riesenhuber. “For people who cannot learn words by phonetically spelling them out — which is the usual method for teaching reading — learning the whole word as a visual object may be a good strategy.”