Article | Open access | Peer reviewed

Hyperscanning: Beyond the Hype

2020; Cell Press; Volume: 109; Issue: 3; Language: English

10.1016/j.neuron.2020.11.008

ISSN

1097-4199

Authors

Antonia F. de C. Hamilton

Topic(s)

Neural dynamics and brain function

Abstract

Hyperscanning—the recording of brain activity from multiple individuals—can be hard to interpret. This paper shows how integrating behavioral data and mutual prediction models into hyperscanning studies can lead to advances in embodied social neuroscience.

Social interaction is central to our cognition and our health. Our actions and decision-making in everyday life are heavily influenced by others, and atypical social interactions are a feature of the majority of psychiatric and mental health conditions. Understanding the brain mechanisms of social interaction is therefore an important goal for human neuroscience. Hyperscanning, the measurement of brain activity from more than one individual at the same time (Figure 1A), has recently been hailed as a game-changer in the study of human social interaction (Gvirts and Perlmutter, 2019). This paper takes a critical look at some of the claims that have been made for hyperscanning and at the limitations of many analyses. We further suggest that, by integrating data from brain and behavior, it will be possible to move beyond the hype and realize the potential of this new domain. We focus primarily on functional near-infrared spectroscopy (fNIRS; Pinti et al., 2020a) because fNIRS is one of the most widely used hyperscanning modalities, but we also consider EEG and electrophysiology.

A good example of the hyperscanning genre is an influential paper from Cui and colleagues (Cui et al., 2012). This study captured data from the prefrontal cortex of pairs of participants performing two distinct tasks: a cooperation task, in which both participants must try to press a button at the same time, and a competition task, in which the two participants must try to press the button as fast as possible. The participants cannot communicate directly but receive feedback after each trial, which enables them to improve their performance. fNIRS signals from prefrontal cortex were analyzed with wavelet coherence (Figure 1B), which showed greater coherence in right superior frontal cortex during the cooperation blocks than during competition or solo performance.
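To make the coherence measure concrete, the following is a minimal sketch of computing coherence between one fNIRS channel from each member of a dyad. It uses SciPy's magnitude-squared coherence as a simplified stand-in for the wavelet transform coherence used by Cui et al. (2012); the simulated signals, the 10 Hz sampling rate, and the 0.02-0.1 Hz band are illustrative assumptions rather than values from that study.

```python
# Minimal sketch: spectral coherence between one fNIRS channel from each
# participant. Magnitude-squared coherence is a simplified stand-in for the
# wavelet transform coherence of Cui et al. (2012); signals, sampling rate,
# and frequency band are illustrative assumptions.
import numpy as np
from scipy.signal import coherence

fs = 10.0                      # assumed fNIRS sampling rate (Hz)
t = np.arange(0, 300, 1 / fs)  # one 5-minute recording
rng = np.random.default_rng(0)

# Placeholder signals: a shared slow oscillation plus independent noise,
# standing in for oxygenated-haemoglobin traces from the two participants.
shared = np.sin(2 * np.pi * 0.05 * t)
fnirs_a = shared + 0.5 * rng.standard_normal(t.size)
fnirs_b = shared + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence per frequency (0 = unrelated, 1 = perfectly
# coherent at that frequency).
freqs, coh = coherence(fnirs_a, fnirs_b, fs=fs, nperseg=256)

# Average coherence in a slow band of the kind examined in fNIRS
# hyperscanning (the 0.02-0.1 Hz range is an assumption for illustration).
band = (freqs >= 0.02) & (freqs <= 0.1)
print(f"mean coherence, 0.02-0.1 Hz: {coh[band].mean():.2f}")
```

A wavelet-based analysis additionally resolves how coherence changes over time, which is useful for comparing task blocks (e.g., cooperation versus competition) within a single recording.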
A number of studies have replicated and extended the finding of coherence in prefrontal cortex when participants are engaged in a social interaction, including during conversation, eye contact, decision-making, and motor coordination tasks (Fishburn et al., 2018). Furthermore, differences in the strength of cross-brain coherence have been reported for ingroup compared to outgroup pairs (Yang et al., 2020), for mixed-gender compared to same-gender pairs, and in relation to autistic traits in children (reviewed in Pinti et al., 2020a).

Based on these results, some strong claims have been made for the importance of hyperscanning in cognitive neuroscience and psychiatry. For example, brain-to-brain coupling has been described as a "mechanism for transmitting information… regarding remote events" (Hasson et al., 2012) or as a "mechanism of shared intentionality" (Fishburn et al., 2018). One paper suggests that brain-to-brain coupling "might trigger the neural mechanism guiding social alignment" (Gvirts and Perlmutter, 2019). It has been suggested that hyperscanning measures might allow us to diagnose developmental and psychiatric disorders (Leong and Schilbach, 2019) and that using brain stimulation to impose coupling on participants might even treat such disorders (Gvirts and Perlmutter, 2019). These are ambitious claims for an emerging technology, and they deserve careful and detailed examination. Hyperscanning studies clearly show that there are robust patterns of coherence across brains, but the best way to interpret these data is less clear. Claims that there is a mechanism extending across more than one brain can feel like telepathy, disconnected from our standard models of cognition and from a plausible biological framework.
To obtain a better understanding of what cross-brain coherence actually means and of the challenges faced by this new domain, it helps to first understand some basic limitations of coherence measures. First, these measures describe the relationship between two time series but do not take into account any other factors: changes in participant behavior, the task, or the testing environment do not feature in most coherence analyses. Second, coherence measures capture only symmetrical effects, in which both brains show the same pattern of change. Many social interactions (e.g., giving/taking an object, turn-taking in conversation) are asymmetric, with the two participants having different roles, and it is not clear whether coherence analysis can capture this. Third, the relationship between hyperscanning results and traditional models of cognition is unclear: because the analysis is specific to the two-person situation, hyperscanning studies sometimes do not examine or interpret the one-person contrasts that a traditional cognitive neuroscientist might expect to see.

An even more important challenge for the interpretation of hyperscanning data is the relationship between cross-brain coherence in hyperscanning (Figure 1A) and inter-subject correlations recorded in sequential studies (Figure 1C). We know that two people who each view the same movie alone in an MRI scanner show similar brain activity patterns, and that this effect is seen even when one person tells a story and the other listens (Hasson et al., 2012). These results reflect the fact that common cognitive processing of an external stimulus can give rise to coherence between brains. As recently pointed out for EEG data (Burgess, 2013), coherence in fNIRS could arise because both participants respond in the same way to common environmental stimuli. Thus, it could be argued that the robust coherence patterns reported in many hyperscanning studies reveal the similar processing of the shared environment and common task set experienced by the two participants, and not any additional social factors.
There are two feasible responses to the challenge of common processing. The first is to design more complex methods that factor out the common effects. Several studies do this by increasing the number of participants tested. For example, if a study records data from three (or more) people who are in the same room at the same time, and two of those participants show strong coherence while the others do not, it is hard to argue that the matched sensory environment is driving the coherence (Fishburn et al., 2018; Yang et al., 2020). However, it remains possible that common motor processing or common cognitive processing between the interacting participants drives these effects.

The second option is to accept the argument that fNIRS hyperscanning can only measure common processing between two brains and to use this as a relevant outcome in its own right. Studies of individual differences in coherence levels, or of coherence across social groups, typically take the overall coherence level as an index of how well people are coordinating. Recent data show that this can correlate with performance on a social decision-making task (Yang et al., 2020). Such studies typically require very large sample sizes in order to relate dyad-level differences in coherence to dyad-level differences in behavior. Moreover, it may be hard or impossible to pin down any effects to one person in the pair. This limits the usefulness of coherence analyses for the tracking or diagnosis of clinical populations, where results must be robust at the single-subject level and it must be possible to define which individual within a pair or group is showing atypical brain activity. Finally, if coherence measures in a hyperscanning context really are measuring exactly the same thing as the inter-subject correlations used in an fMRI context, then asking participants to watch a movie alone might give more robust and repeatable results for clinical use.

The pioneering studies of human hyperscanning have made an important contribution in showing what can be done in a new domain. However, the limitations described above mean that the claim that hyperscanning will revolutionize social neuroscience might seem like just hype. If coherence measures cannot take behavior into account, cannot be related to traditional models, and cannot distinguish interaction from common processing, then the value of hyperscanning might seem limited. To move beyond the hype, we may need to do bigger, better experiments and interpret them within a stronger theoretical framework.

An important starting point is the recognition that interacting brains exist within interacting bodies. Visual, auditory, and motor processes mediate any coordination between two brains, so we must study brains and bodily coordination together. This means that labs must capture and analyze behavioral data from their participants, considering how each individual in a social interaction uses their actions to signal to others and receives social information from others. An increasing number of flexible technologies are now available to capture participants' hand and body movements, eye movements, facial expressions, and physiological changes, and such systems should be standard in a hyperscanning lab (Figure 1D). These tools also mean that it is possible, even desirable, to move away from traditional experimental designs in which discrete trials are repeated many times. Such designs give tight control to the experimenter but do not create a natural and meaningful social interaction in which participants can engage in a variety of social behaviors. Moving toward more natural task scenarios, where people can work together over a longer period of time, observing and responding to each other, might reveal patterns of social behavior or social brain activity that are suppressed in the controlled lab situation.
By capturing and analyzing data from both brain and body while participants are engaged in natural interactions, it will be possible to understand how the coordination of social brains is embodied in the interaction of social bodies.

We also need a neurocognitive theory within which to understand this kind of interaction. A strong candidate emerges if we focus on what participants are actually doing during a social interaction. They must control their own motor behavior, moving hands, face, and gaze to interact with their partner, and they must also understand and predict their partner's motor behavior in order to act appropriately. Both acting and predicting are very general requirements: whether people are cooking a meal together, playing a piano duet, or taking turns in conversation, the need to predict one's partner and perform one's own action is essential to a fluent social interaction. This idea has recently been formalized in the mutual prediction theory (Kingsbury et al., 2019). The central claim is that each individual in a social interaction has brain systems that control their own behavior (A-self and B-self in Figure 1E) as well as brain systems that predict the behavior of their partner (A-Other and B-Other in Figure 1E). If these systems are co-localized in the brain and the neural activity of both systems is summed into a crude measure, then there will be a pattern of similarity, or coherence, between the overall brain activity of individual A and individual B. Thus, when prediction mechanisms that operate within the brain of a single individual are engaged by two people who mutually predict each other, this can give rise to signals that are coherent across the two brains.

Evidence in support of mutual prediction theory comes from a detailed study of pairs of interacting mice. Kingsbury and colleagues used microendoscopic calcium imaging to record from hundreds of neurons in the dorsomedial prefrontal cortex (dmPFC) of pairs of mice while their behavior was tracked in natural social interactions. They identified neurons that encoded the mouse's own actions and other neurons that encoded the actions of the partner mouse, consistent with the mutual prediction model. More importantly, the summed activity across the PFC showed coherence across the two animals, demonstrating that mutual prediction at a fine-grained level can give rise to patterns of cross-brain coherence at a whole-brain level. Furthermore, they could quantify the asymmetry in the social relationship between the mice, with subordinate mice showing more prediction of their partner than dominant mice. The data were analyzed by adding behavioral data and cross-brain data to a traditional general linear model (GLM), in order to model the activity of one individual's brain in terms of their own behavior as well as their partner's behavior and brain activity (cross-brain GLM, or xGLM; Figure 1F). The outputs of this analysis allow the researcher to quantify the contribution of each factor within the model (self-behavior, other-behavior, task, and cross-brain contributions) and can be interpreted in relation to traditional cognitive neuroscience.
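As a rough illustration of the xGLM idea (not the analysis code of Kingsbury et al., 2019), the sketch below models one participant's simulated brain signal as a weighted sum of a self-behavior regressor, an other-behavior regressor, a task regressor, and the partner's brain signal; all variable names and data are hypothetical.

```python
# Minimal sketch of a cross-brain GLM (xGLM): explain one participant's
# brain signal using self-behavior, other-behavior, task, and the partner's
# brain signal. All regressors and data are simulated placeholders; this is
# not the analysis pipeline of Kingsbury et al. (2019).
import numpy as np

rng = np.random.default_rng(0)
n = 3000                                   # time points (e.g., 5 min at 10 Hz)

brain_b   = rng.standard_normal(n)         # partner's brain signal
self_beh  = rng.standard_normal(n)         # e.g., own movement speed
other_beh = rng.standard_normal(n)         # e.g., partner's movement speed
task      = (np.arange(n) % 600 < 300).astype(float)  # block on/off regressor

# Simulate participant A's brain signal with known contributions from each
# factor, plus noise.
brain_a = (0.8 * self_beh + 0.4 * other_beh + 0.3 * task
           + 0.5 * brain_b + rng.standard_normal(n))

# Design matrix: intercept plus the four regressors of the mutual
# prediction account (self, other, task, cross-brain).
X = np.column_stack([np.ones(n), self_beh, other_beh, task, brain_b])
betas, *_ = np.linalg.lstsq(X, brain_a, rcond=None)

for name, b in zip(["intercept", "self-behavior", "other-behavior",
                    "task", "cross-brain"], betas):
    print(f"{name:>15}: {b:+.2f}")
```

Comparing the fitted weights, or the variance explained with and without the cross-brain regressor, is one way to ask how much of a signal reflects the partner rather than the shared task.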
Thus, the xGLM analysis allows researchers to integrate data across different modalities from freely moving animals engaged in a social interaction and to test the mutual prediction hypothesis in a tractable fashion. To test whether mutual prediction also applies in humans, it will be useful to apply xGLM approaches to data from human hyperscanning studies. Initial hints that this can work come from a recent study (Pinti et al., 2020b) in which pairs of participants played a card game similar to poker: one participant took the role of informer and could choose to lie or tell the truth about her card, while her partner had to guess whether the card was high or low, winning points for a correct answer. Activity across prefrontal cortex in the informer was linked to deceptive intent. More critically, an xGLM analysis with a range of time lags identified two channels in the informer's brain where activity patterns reliably preceded similar activity in the guesser's brain with a 2 s lag. This provides a clue to mutual prediction in humans, but unfortunately the study used a task in which it is very hard for participants to accurately predict each other (making mutual prediction signals difficult to detect) and did not record behavioral or physiological factors in sufficient detail to implement a full xGLM approach. Future studies that directly contrast cooperative and competitive tasks, with and without the opportunity to predict, and that record social behavior in more detail will be very valuable.
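A minimal sketch of the lagged logic is given below: a simulated informer signal is shifted across a range of lags to find where it best predicts the guesser's signal. The sampling rate, lag range, and built-in 2 s delay are assumptions chosen to mirror the kind of result described above, not data or code from Pinti et al. (2020b).

```python
# Minimal sketch of a time-lagged cross-brain analysis: at which lag does
# the informer's channel best predict the guesser's channel? Sampling rate,
# lag range, and the simulated 2 s delay are illustrative assumptions.
import numpy as np

fs = 10.0                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
n = 3000

informer = rng.standard_normal(n)
# Simulate the guesser's signal following the informer by 2 s (20 samples).
guesser = np.roll(informer, 20) + 0.5 * rng.standard_normal(n)

lags = np.arange(0, 51)                    # test lags from 0 to 5 s
scores = [np.corrcoef(informer[:n - lag], guesser[lag:])[0, 1]
          for lag in lags]

best = lags[int(np.argmax(scores))]
print(f"best lag: {best / fs:.1f} s (r = {max(scores):.2f})")
```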
These two studies provide initial evidence that the mutual prediction theory can account for patterns of cross-brain coherence in both mice and humans engaged in a social interaction. The theory can also help us understand older datasets, such as the original hyperscanning study from Cui et al. (2012) described above. That study compared a "cooperation" task to a "competition" task and has sometimes been interpreted in terms of cross-brain mechanisms of cooperation. However, close examination of the cognitive demands of the tasks shows that the cooperation task requires a high degree of mutual prediction: participants must predict each other's movements in order to act at the same time. In contrast, the competition task requires participants to move as fast as possible, and predicting the partner would only be a distraction. Thus, the differences in cross-brain coherence between these two tasks are entirely consistent with the mutual prediction hypothesis. A further prediction follows: cross-brain coherence should not be seen if participants engage in a cooperation task that does not demand mutual prediction, such as winning points whenever both people move faster than an external limit.

Overall, studies of mutual prediction demonstrate that examining brain and behavior together helps us understand how bodily actions mediate and implement social interactions. Within this framework, the mutual prediction hypothesis provides a strong basis for interpreting hyperscanning studies of human and animal social interactions, and it has several advantages. First, it proposes a specific predictive mechanism that causes cross-brain coherence, grounded in detailed animal data and in line with more general predictive models of cognition. Second, it motivates and enables the integration of brain and behavior into a unified model in which the social brain works to predict and enact social behavior. Third, the xGLM approach to data analysis builds on a long tradition of research in cognitive neuroscience, allowing us to extend existing models to include behavioral and cross-brain data and to model both symmetric and asymmetric interactions. This means it will be feasible to analyze data from a wider range of contexts and to integrate new hyperscanning results with our fundamental theories of cognitive neuroscience. Finally, the examples above demonstrate how studying interacting brains and bodies together can be applied across both controlled lab settings and unstructured, ecologically valid interactions, allowing us to move social neuroscience research into the real world.

This short review has described the challenges of interpreting cross-brain coherence data and has suggested two new ways to overcome them. In terms of research design, if we can collect and integrate data from interacting bodies (hands, faces, eyes, speech) as well as interacting brains, then we have the potential to understand how embodied interactions cause cross-brain coherence effects. The analysis of data with xGLMs provides one way to accomplish this integration, but other methods, such as Granger causality, might also be valuable. In terms of neural mechanisms, the mutual prediction hypothesis gives us a clear framework within which to interpret data from different cognitive tasks, as well as to make specific predictions for future studies. By combining these advances with new methods for tracking and interpreting natural social behavior, it will be possible for hyperscanning to go beyond the recent hype and reveal the neurocognitive mechanisms that support our human face-to-face social interactions.
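For completeness, a minimal sketch of a Granger-style check on simulated data is shown below, using statsmodels; the channel names, the simulated five-sample delay, and the lag range are illustrative assumptions only, not part of any study discussed above.

```python
# Minimal sketch of a Granger-causality check between two simulated brain
# signals using statsmodels. Data, delay, and lag range are illustrative
# assumptions.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 2000
brain_a = rng.standard_normal(n)
# brain_b partly follows brain_a with a five-sample delay, plus noise.
brain_b = 0.6 * np.roll(brain_a, 5) + rng.standard_normal(n)

# Column order matters: the test asks whether the SECOND column helps
# predict the FIRST. Here: does brain_a Granger-cause brain_b?
data = np.column_stack([brain_b, brain_a])
results = grangercausalitytests(data, maxlag=6)

# p-value of the F test at lag 5 (the simulated delay).
p_lag5 = results[5][0]["ssr_ftest"][1]
print(f"p-value for brain_a -> brain_b at lag 5: {p_lag5:.3g}")
```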

Reference(s)

Burgess, A.P. (2013). On the interpretation of synchronization in EEG hyperscanning studies: a cautionary note. Front. Hum. Neurosci. 7, 881.

Cui, X., Bryant, D.M., and Reiss, A.L. (2012). NIRS-based hyperscanning reveals increased interpersonal coherence in superior frontal cortex during cooperation. Neuroimage 59, 2430-2437.

Fishburn, F.A., Murty, V.P., Hlutkowsky, C.O., MacGillivray, C.E., Bemis, L.M., Murphy, M.E., Huppert, T.J., and Perlman, S.B. (2018). Putting our heads together: interpersonal neural synchronization as a biological mechanism for shared intentionality. Soc. Cogn. Affect. Neurosci. 13, 841-849.

Gvirts, H.Z., and Perlmutter, R. (2019). What guides us to neurally and behaviorally align with anyone specific? A neurobiological model based on fNIRS hyperscanning studies. Neuroscientist 26, 108-116.

Hasson, U., Ghazanfar, A.A., Galantucci, B., Garrod, S., and Keysers, C. (2012). Brain-to-brain coupling: a mechanism for creating and sharing a social world. Trends Cogn. Sci. 16, 114-121.

Kingsbury, L., Huang, S., Wang, J., Gu, K., Golshani, P., Wu, Y.E., and Hong, W. (2019). Correlated neural activity and encoding of behavior across brains of socially interacting animals. Cell 178, 429-446.e16.

Leong, V., and Schilbach, L. (2019). The promise of two-person neuroscience for developmental psychiatry: using interaction-based sociometrics to identify disorders of social interaction. Br. J. Psychiatry 215, 1-3.

Pinti, P., Tachtsidis, I., Hamilton, A., Hirsch, J., Aichelburg, C., Gilbert, S., and Burgess, P.W. (2020a). The present and future use of functional near-infrared spectroscopy (fNIRS) for cognitive neuroscience. Ann. N Y Acad. Sci. 1464, 5-29.

Pinti, P., Devoto, A., Greenhalgh, I., Tachtsidis, I., Burgess, P., and Hamilton, A. (2020b). The role of anterior prefrontal cortex (area 10) in face-to-face deception measured with fNIRS. Soc. Cogn. Affect. Neurosci., 1-14.

Yang, J., Zhang, H., Ni, J., De Dreu, C.K.W., and Ma, Y. (2020). Within-group synchronization in the prefrontal cortex associates with intergroup conflict. Nat. Neurosci. 23, 754-760.