
"Hearing faces and seeing voices": Amodal coding of person identity in the human brain

Abstract: Recognizing familiar individuals is achieved by the brain by combining cues from several sensory modalities, including a person's face and voice. Here we used functional magnetic resonance imaging (fMRI) and a whole-brain, searchlight multi-voxel pattern analysis (MVPA) to search for areas in which local fMRI patterns could support identity classification as a function of sensory modality. We found several areas supporting face or voice stimulus classification based on fMRI responses, consistent with previous reports; the classification maps overlapped across modalities in a single area of the right posterior superior temporal sulcus (pSTS). Remarkably, we also found several cortical areas, mostly located along the middle temporal gyrus, in which local fMRI patterns resulted in identity "cross-classification": vocal identity could be classified based on fMRI responses to faces, or the reverse, or both. These findings are suggestive of a series of cortical identity representations increasingly abstracted from the input modality.

• Local patterns of cerebral activity measured with fMRI can classify familiar faces or voices.
• Overlap of face- and voice-classifying areas in the right posterior STS.
• Cross-classification of facial and vocal identity in several temporal lobe areas.

The ability to recognize familiar individuals is of high importance in our social interactions. The human brain achieves this by making use of cues from several sensory modalities, including visual signals from a person's face and auditory signals from their voice [1,2]. There is evidence that these cues are combined across senses to yield more accurate, more robust representations of person identity, a clear case of multisensory integration [3–5].
For instance, familiar speaker recognition is faster and more accurate when the voice is paired with a time-synchronized face of the same individual than when the voice is presented alone, and slower and less accurate when paired with the face of a different individual [3]. The contribution of different sensory modalities to person perception is acknowledged in particular by cognitive models such as Bruce and Young's (1986) model of face perception. Specifically, they proposed the notion of "person identity nodes" (PINs): a portion of associative memory holding identity-specific semantic codes that can be accessed via the face, the voice or other modalities; it is the point at which person recognition, as opposed to face recognition, is achieved [6,7]. Whether the PINs have a neuronal counterpart in the human brain remains unclear, in part because most studies of person recognition, whether using neuropsychological assessment of patients with brain lesions or neuroimaging techniques such as fMRI in healthy volunteers, have focused on a single modality, mostly faces and, far less often, voices; only a few studies have investigated the cerebral bases of person recognition based on more than one sensory modality [1,4]. Lesion and neuroimaging studies have suggested potential candidate cortical areas for the PINs, including the precuneus [8], parietal and hippocampal regions [9–11], the posterior superior temporal sulcus (pSTS) [12,13] and the anterior temporal lobes [14]. However a PIN, as defined in Bruce & Young (1986), would correspond to a patient with a brain
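The cross-classification logic described in the abstract (training a decoder on response patterns from one modality and testing it on the other) can be sketched as follows. This is an illustrative toy example, not the authors' analysis pipeline: the simulated data, voxel counts, identity labels, and the choice of a linear SVM are all assumptions.

```python
# Toy sketch of cross-modal identity "cross-classification":
# a classifier trained on fMRI patterns evoked by faces is tested
# on patterns evoked by voices. All data are simulated.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_voxels, n_trials_per_id, n_ids = 50, 20, 2

# Simulate a shared, modality-independent identity code.
identity_patterns = rng.normal(size=(n_ids, n_voxels))

def simulate_trials(noise=1.0):
    """Generate noisy trial-wise patterns around each identity's pattern."""
    X = np.vstack([
        identity_patterns[i] + noise * rng.normal(size=(n_trials_per_id, n_voxels))
        for i in range(n_ids)
    ])
    y = np.repeat(np.arange(n_ids), n_trials_per_id)
    return X, y

X_face, y_face = simulate_trials()    # patterns from face trials
X_voice, y_voice = simulate_trials()  # patterns from voice trials

# Train on face trials, test on voice trials: above-chance accuracy
# suggests an identity representation abstracted from input modality.
clf = LinearSVC(dual=False).fit(X_face, y_face)
acc = clf.score(X_voice, y_voice)
print(f"face->voice cross-classification accuracy: {acc:.2f}")
```

In the searchlight variant reported in the paper, an analysis of this kind is repeated for the voxels inside a small sphere centered on each voxel in turn, producing a whole-brain map of cross-classification accuracy.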
Document type: Journal article
Contributor: Pascal Belin
Submitted on: Thursday, February 16, 2017 - 10:01:31 AM
Last modification on: Thursday, December 19, 2019 - 12:18:02 PM






Bashar Awwad Shiekh Hasan, Mitchell Valdes-Sosa, Joachim Gross, Pascal Belin. "Hearing faces and seeing voices": Amodal coding of person identity in the human brain. Scientific Reports, Nature Publishing Group, 2016. ⟨10.1038/srep37494⟩. ⟨hal-01469009⟩


