Maximilian Riesenhuber, PhD

Professor
PhD, 2000, MIT (Computational Neuroscience)
New Research Building, Room WP-12
Phone: 202.687.9198
Fax: 202.784.3562
E-mail: mr287@georgetown.edu

The lab investigates the computational mechanisms underlying human object recognition as a gateway to understanding the neural bases of intelligent behavior. The ability to recognize objects is a fundamental cognitive task in every sensory modality, e.g., for friend/foe discrimination, social communication, reading, or hearing, and its loss or impairment is associated with a number of neural disorders (e.g., in autism, dyslexia, or schizophrenia). Yet despite the apparent ease with which we see and hear, object recognition is widely acknowledged to be a very difficult computational problem. It is even more difficult from a biological systems perspective, since it involves several levels of understanding, from the computational level, through the levels of cellular and biophysical mechanisms and neuronal circuits, up to the level of behavior.

In our work, we combine computational models with human behavioral, functional magnetic resonance imaging (fMRI), and electroencephalography (EEG) data. This comprehensive approach addresses one of the major challenges in neuroscience today: the need to combine experimental data from a range of approaches in order to develop a rigorous and predictive model of human brain function that quantitatively and mechanistically links neurons to behavior. This is of interest not only for basic research, but also for the investigation of the neural bases of behavioral deficits in mental disorders. Understanding the neural mechanisms underlying object recognition in the brain is also of significant relevance for artificial intelligence, as the capabilities of pattern recognition systems in engineering (e.g., in machine vision or speech recognition) still lag far behind those of their human counterparts in terms of robustness, flexibility, and the ability to learn from few exemplars. Finally, a mechanistic understanding of the neural processes endowing the brain with its superior object recognition abilities opens the door to supporting and extending human cognitive abilities in this area through hybrid brain-machine systems (“augmented cognition”).

Most of our work focuses on the domain of vision, reflecting its status as the most accessible sensory modality. However, given that similar problems of specificity and invariance have to be solved in other sensory modalities as well (for instance, in audition), it is likely that similar computational principles underlie processing in those domains, and we are interested in understanding commonalities and differences in processing between modalities. Current projects include research on neuromorphic learning, multitasking, vibrotactile speech, and plasticity in the auditory system.
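To make the specificity/invariance trade-off concrete, here is a minimal, purely illustrative sketch in Python (NumPy), in the spirit of HMAX-style models: a template-matching stage provides specificity (each unit responds to one particular pattern), and a max-pooling stage provides invariance (the pooled response no longer depends on where the pattern appears). All function names are hypothetical; this is not code from the lab.

```python
import numpy as np

def template_responses(image, template):
    """Normalized cross-correlation of one template at every image
    position: 'simple'-unit responses, i.e., specificity."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    out = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * np.linalg.norm(t)
            out[y, x] = (p * t).sum() / denom if denom > 0 else 0.0
    return out

def pooled_response(image, template):
    """Max over positions: a 'complex'-unit response that is
    invariant to where the pattern appears, yet still selective."""
    return template_responses(image, template).max()

# Demo: the same 4x4 feature placed at two different positions
# produces (near-)identical pooled responses.
rng = np.random.default_rng(0)
feature = rng.standard_normal((4, 4))
img_a = np.zeros((16, 16)); img_a[2:6, 2:6] = feature     # feature top-left
img_b = np.zeros((16, 16)); img_b[10:14, 9:13] = feature  # feature bottom-right
print(pooled_response(img_a, feature))  # ~1.0
print(pooled_response(img_b, feature))  # ~1.0
```

In the demo, the same feature embedded at two different image positions yields the same pooled response, while an unrelated pattern would score near zero: selectivity is preserved even as positional information is discarded.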

Understanding How to Best Transform Speech into Tactile Vibrations Could Benefit Hearing-Impaired People

“In the past few years, our understanding of how the brain processes information from different senses has expanded greatly as we are starting to understand how brain networks are connected across different sensory pathways, such as vision, hearing and touch,” says Maximilian Riesenhuber, PhD, professor in the Department of Neuroscience at Georgetown University Medical Center and senior author of the study.

After Learning New Words, Brain Sees Them as Pictures

“The findings not only help reveal how the brain processes words, but also provide insights into how to help people with reading disabilities,” says Riesenhuber. “For people who cannot learn words by phonetically spelling them out, which is the usual method for teaching reading, learning the whole word as a visual object may be a good strategy.”

Why Sometimes, We Don’t See What We Actually Saw

Georgetown neuroscientists say they have identified how people can experience a “crash in visual processing”: a bottleneck of feedforward and feedback signals that can prevent us from becoming consciously aware of stimuli our brain has nevertheless recognized.

Making Sense of Senses: Researchers Find the Brain Processes Sight and Sound in the Same Two-Step Manner

Although sight is a very different sense from sound, Georgetown University Medical Center neuroscientists have found that the human brain learns to make sense of both kinds of stimuli in the same way. The researchers say that in a two-step process, neurons in one area of the brain learn the representation of the stimuli, and another area categorizes that input so as to ascribe meaning to it: like first seeing just a car without a roof and then analyzing that stimulus in order to place it in the category of “convertible.”
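The two-step scheme described above, learning a representation first and a category readout second, can be sketched in miniature. The following Python (NumPy) toy is purely illustrative and is not the study's analysis: stage one learns prototype “neurons” from unlabeled stimuli via k-means, and stage two attaches category labels to those prototypes, ascribing meaning to the learned representation. All names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stimuli: 2-D points drawn from four clusters; the two left clusters
# belong to category 0, the two right clusters to category 1.
centers = np.array([[-3, -2], [-3, 2], [3, -2], [3, 2]], dtype=float)
labels_of_center = np.array([0, 0, 1, 1])
idx = rng.integers(0, 4, size=400)
stimuli = centers[idx] + rng.standard_normal((400, 2))
categories = labels_of_center[idx]

# Stage 1 (representation): unsupervised k-means learns prototypes,
# loosely analogous to neurons becoming tuned to recurring stimulus patterns.
def kmeans(data, k, iters=50):
    protos = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((data[:, None] - protos[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (assign == j).any():
                protos[j] = data[assign == j].mean(axis=0)
    return protos

protos = kmeans(stimuli, k=4)

# Each stimulus is re-coded as graded responses of the prototype units.
def represent(data, protos):
    d2 = ((data[:, None] - protos[None]) ** 2).sum(-1)
    return np.exp(-d2 / 2.0)  # Gaussian tuning curves

# Stage 2 (categorization): a simple supervised readout ascribes meaning;
# each prototype votes for the category most common among its stimuli.
resp = represent(stimuli, protos)
nearest = resp.argmax(axis=1)
proto_label = np.array([np.bincount(categories[nearest == j], minlength=2).argmax()
                        for j in range(len(protos))])

pred = proto_label[nearest]
print("training accuracy:", (pred == categories).mean())  # ~1.0 on this toy set
```

The design mirrors the division of labor in the description: the representation stage knows nothing about categories, and the categorization stage operates only on the representation-stage responses.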