
How does our brain turn what falls upon our eyes into the rich, meaningful experience of the world around us?

The Laboratory of Cognitive Neurodynamics studies the neural basis of high-level visual processing to help answer this question. Specifically, we examine the spatiotemporal dynamics of how neural activity reflects the stages of information processing and how information flows through the brain networks responsible for visual perception. We are particularly interested in the dynamic neural representation of faces, bodies, objects, words, and social and affective visual images.

To accomplish our research goals, we record electrophysiological brain activity from humans using both invasive (intracranial EEG; iEEG, in collaboration with Dr. Jorge Gonzalez-Martinez) and non-invasive (magnetoencephalography; MEG) measures. In conjunction with these millisecond-scale recordings, we use multivariate machine learning methods, network analysis, and advanced signal processing techniques to assess the information processing dynamics reflected in brain activity. Additionally, we use direct neural stimulation to examine how disrupting and modulating brain activity alters visual perception. This combination of modalities and analysis techniques allows us to ask fine-grained questions about neural information processing and information flow at the scale of both local brain regions and broadly distributed networks.

Our specific research interests fall into three broad topics:


Social and Affective Visual Perception

Face, Object, and Word Recognition

New Ways of Assessing Brain Network Interactions and Dynamics

Examples of body-to-face identity adaptation stimuli, including face morphs, gender-specific bodies, and two adaptation trial sequences. (Ghuman et al., 2010)

We use information from a person's face, body, and motions to assess their actions, intentions, and emotional state. Visual social and affective perception relies on a neural circuit in the temporal lobe of the brain that includes the amygdala, fusiform gyrus, and posterior superior temporal sulcus. While it is clear that these regions are central to social and affective perception, we lack a mechanistic, causal understanding of the neurodynamics and circuit-level interactions in this network that give rise to our understanding of the actions, intentions, and emotional states of others. To fill this knowledge gap, we use MEG and electrophysiological recordings from electrodes implanted in the areas of the cortical-limbic network involved in social and affective perception in individuals undergoing surgical treatment for epilepsy. This gives us millisecond-level temporal resolution of how the activity within these regions codes for different facial expressions, intentional actions, inferences about the mental states of others, and more. We also use measures of neural communication to understand how information flows among these brain areas during social and affective perception (a sketch of one such measure follows below). Finally, we use direct electrical stimulation of these brain areas through the iEEG electrodes to see how stimulation can alter social and affective perception.
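To make the idea of a neural-communication measure concrete, here is a minimal Python sketch of one standard index of trial-to-trial synchrony, the phase-locking value (PLV), between two recording sites. The data, site names, and frequency band are hypothetical illustrations, not taken from our recordings or our exact analysis pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def phase_locking_value(x, y, fs, band=(8.0, 12.0)):
        """Phase-locking value between two sites (trials x times arrays).

        Band-pass filters each trial, extracts instantaneous phase with the
        Hilbert transform, and averages the phase-difference vectors across
        trials: 1 = perfectly phase locked, 0 = no consistent phase relation.
        """
        b, a = butter(4, band, btype="bandpass", fs=fs)
        phase_x = np.angle(hilbert(filtfilt(b, a, x, axis=-1), axis=-1))
        phase_y = np.angle(hilbert(filtfilt(b, a, y, axis=-1), axis=-1))
        return np.abs(np.mean(np.exp(1j * (phase_x - phase_y)), axis=0))

    # Hypothetical example: 60 trials x 2000 ms of simulated activity from
    # two recording sites, sampled at 1000 Hz.
    fs = 1000
    rng = np.random.default_rng(0)
    site_a = rng.standard_normal((60, 2000))
    site_b = rng.standard_normal((60, 2000))
    plv = phase_locking_value(site_a, site_b, fs)
    print(plv.shape)  # (2000,): one synchrony value per millisecond

Because the phase-difference vectors are averaged across trials at each time point, the result is a millisecond-resolution time course of synchrony between the two areas.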

The aim of this work is to develop and test a model of social and affective information processing dynamics in the human brain. This model will yield translationally relevant and testable hypotheses regarding spatiotemporal neural targets for using brain stimulation to modulate social and affective perception in a controlled manner. This line of work can ultimately inform evidence-based stimulation therapies for disorders that involve aberrant social and affective perception, such as schizophrenia, depression, post-traumatic stress disorder (PTSD), and autism; for example, by normalizing the aberrant emotional salience assigned to benign, everyday visual information in PTSD.

Example of stimuli from each condition and event-related potential (ERP) waveforms in the fusiform face area (FFA). (Ghuman et al., 2014)

Our remarkable capacity to recognize a large variety of faces, words, and objects is critical to our ability to interact efficiently with the world around us. This capacity arises from regions of the temporal, occipital, and parietal lobes tuned to particular visual categories, such as faces, words, and bodies. We aim to understand the dynamic information processing stages that these brain networks use to recognize visual input. To accomplish this aim, we use iEEG and MEG recordings in conjunction with multivariate machine learning methods to decode the information being processed in these brain networks on a millisecond-by-millisecond basis. By understanding how the neural code for different aspects of visual information changes over time, we can assess how the brain dynamically moves through different levels of representation. For example, we recently found that areas that process face information shift from a category-level representation early (is it a face or not?) to an individual-level representation later (which face is it?). We are further investigating how network-level interactions may give rise to these shifts in representation, and what kinds of information processing and information flow underlie them.
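As an illustration of time-resolved multivariate decoding, the sketch below trains and cross-validates a separate classifier at each time point, producing a millisecond-by-millisecond decoding accuracy curve. The data are simulated, and the array shapes, labels, and classifier choice are illustrative assumptions rather than a description of our exact methods.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical data: trials x channels x time points of iEEG/MEG
    # activity, with one label per trial (e.g., face A vs. face B).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((120, 64, 500))  # 120 trials, 64 channels, 500 ms
    y = rng.integers(0, 2, size=120)         # two stimulus classes

    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())

    # Train and cross-validate a separate classifier at each time point,
    # yielding a time course of decoding accuracy: when it rises above
    # chance, the recorded population carries stimulus information.
    accuracy = np.array([
        cross_val_score(clf, X[:, :, t], y, cv=5).mean()
        for t in range(X.shape[2])
    ])
    print(accuracy.shape)  # (500,): one accuracy value per millisecond

Comparing when different classifications become decodable (face vs. non-face early, face identity later) is what reveals the shift between levels of representation described above.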

The average naming reaction time for words, letters, and faces under low stimulation (1-5 mA) and high stimulation (6-10 mA) to left mid-fusiform gyrus electrodes. (Manuscript in preparation)

We also use direct brain stimulation, with stimulation parameters informed by the multivariate neural code, to try to modulate face and word recognition performance. The hope is that if we understand the neural code for faces and words well enough, we can "play back" this code to the brain and modulate perception in a controlled manner. This can both greatly advance our understanding of how the brain codes visual information and set the stage for brain stimulation-based visual prostheses for individuals with visual impairments and blindness.

The aim of this work is to develop and test a network-level model of the dynamics underlying visual recognition in the human brain. Besides providing an understanding of the processes by which our brain uses visual information to recognize the things in the world around us, this model will also provide testable hypotheses regarding how abnormalities in these processes lead to disorders of visual recognition. Face and word recognition are impaired not only in prosopagnosia, alexia, and dyslexia, but also in disorders such as schizophrenia and autism. Thus, this work can lead to a better understanding of the neurobiological basis of these disorders.

Multi-Connection Pattern Analysis: connectivity model. (Manuscript in preparation)

How do brain areas talk to one another? What code do they use? What information do they exchange? How does the structure of the brain influence how brain regions communicate? We use, and are developing, neural synchrony measures, multimodal network analyses, and multivariate decoding procedures to better understand the principles of neural communication and information exchange. We are developing novel multivariate machine learning approaches that will allow us to assess the information encoded in neural interactions (see the sketch below). In addition, we are working on methods for assessing the properties of brain networks by fusing multiple brain measurement modalities, such as MEG, functional MRI, structural MRI, near-infrared spectroscopy (NIRS), and electroencephalography. One goal of this fusion is to obtain more robust measures of brain circuitry than any single measurement modality allows, which, for example, would enable more accurate biomarkers of how networks differ in clinical populations. Another main goal is to understand how these measures of different brain properties relate to one another. This will allow us to assess structure-function relationships, neurovascular relationships, and related questions across brain networks in ways that are not currently possible.
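As a toy illustration of decoding information from neural interactions rather than from local activity, the sketch below summarizes each trial by its pattern of between-region channel correlations and asks whether a classifier can recover the stimulus class from those connectivity features. This is a simplified stand-in for the general idea, not the Multi-Connection Pattern Analysis method itself; all data, shapes, and names are hypothetical.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical single-trial data from two regions:
    # trials x channels x time points.
    rng = np.random.default_rng(0)
    region_a = rng.standard_normal((100, 8, 300))
    region_b = rng.standard_normal((100, 8, 300))
    labels = rng.integers(0, 2, size=100)

    # Summarize each trial by its between-region connectivity pattern: the
    # matrix of channel-by-channel correlations across time, flattened into
    # a feature vector.
    n_trials, n_a, _ = region_a.shape
    n_b = region_b.shape[1]
    features = np.empty((n_trials, n_a * n_b))
    for i in range(n_trials):
        a = region_a[i] - region_a[i].mean(axis=1, keepdims=True)
        b = region_b[i] - region_b[i].mean(axis=1, keepdims=True)
        corr = (a @ b.T) / (np.linalg.norm(a, axis=1)[:, None]
                            * np.linalg.norm(b, axis=1)[None, :])
        features[i] = corr.ravel()

    # If the interaction pattern itself carries stimulus information, a
    # classifier on these features should decode the labels above chance.
    score = cross_val_score(LogisticRegression(max_iter=1000),
                            features, labels, cv=5).mean()
    print(f"decoding accuracy from connectivity features: {score:.2f}")

The key design choice is that the classifier never sees the activity in either region alone, only the pattern of their interaction, so above-chance decoding would indicate information carried in the communication between areas.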

The aim of this work is to better understand the informational, biophysical, and physiological principles that underlie brain communication. This will allow us to ascertain the network-level information processing principles of the brain. Furthermore, achieving this aim will allow us to resolve how pathological differences in brain communication in psychiatric and neurological disorders arise from alterations in the underlying biology of the brain.

Contact Us


Laboratory of Cognitive Neurodynamics
UPMC Presbyterian
Suite B-400
200 Lothrop Street
Pittsburgh, PA 15213

(412) 648-0073
cogneurodynamics@pitt.edu

Funding Sources


Brain & Behavior Research Foundation

Defense Advanced Research Projects Agency (DARPA)

National Institute of Mental Health (NIMH)

National Science Foundation (NSF)

The Walter L. Copeland Fund

Related Links


Google Scholar (Dr. Ghuman)

University of Pittsburgh

Department of Neurosurgery

Brain Mapping Center

Center for the Neural Basis of Cognition (CNBC)

Society for Neuroscience

Brain Institute