About

Hi! I'm Pablo Arias-Sarah, a French/Colombian Lecturer working at the University of Glasgow, in the School of Psychology and Neuroscience. I study human social interactions using real-time voice and face transformations. To do this, we developed DuckSoup, a videoconference experimental platform that enables researchers to transform participants' voices and faces in real time (e.g., enhancing their smiles or their vocal intonation) during free social interactions. I am interested in human social communication, social biases and human enhancement.

I hold a PhD in cognitive science from Sorbonne University (Paris, France), a Master of Engineering in digital technologies and multimedia from Polytech' Nantes (Nantes, France), and a Master of Science in acoustics, signal processing and computer science applied to sound from IRCAM (Paris, France). You can find a complete list of my publications here, or follow me on Twitter to keep up to date with my latest work.

Career News

October 2024 I started a permanent position in the School of Psychology and Neuroscience at the University of Glasgow, as part of the Centre for Social Cognitive and Affective Neuroscience! 🎉
November 2022 We were awarded a prestigious grant from the Swedish Research Council (Vetenskapsrådet) to develop our new platform DuckSoup, in collaboration with Petter Johansson and Lars Hall.
October 2022 Moving to Scotland to start a new position as a Marie Curie Fellow in the School of Psychology and Neuroscience at the University of Glasgow, with Philippe Schyns and Rachael Jack, and in collaboration with Lund University Cognitive Science. Super psyched! 🤩
June 2022 I won a Marie Curie Individual Postdoctoral Fellowship for my proposal SINA (Studying Social Interactions with Audiovisual Transformations), in collaboration with Rachael Jack and Philippe Schyns (University of Glasgow) and Petter Johansson (Lund University)! 💣
June 2021 We were awarded the Sorbonne University Emergence grant for our project REVOLT (Revealing human bias with real-time vocal deep fakes), in collaboration with Nicolas Obin (Sorbonne University) 💥.
September 2019 I'm starting a new postdoctoral position at Lund University Cognitive Science in Sweden, working with Petter Johansson and Lars Hall in the Choice Blindness lab! We aim to create unprecedented methodological tools to study human social interaction mechanisms.
December 2018 Defended my PhD thesis, entitled "The cognition of auditory smiles: a computational approach", which was evaluated by an inspiring jury composed of Tecumseh Fitch (Univ. Vienna), Rachael Jack (Univ. Glasgow), Catherine Pelachaud (Sorbonne University), Martine Gavaret (Paris Descartes), Julie Grèzes and Pascal Belin (Univ. Aix-Marseille), Patrick Susini (IRCAM) and Jean-Julien Aucouturier (CNRS).

Research Highlights

November, 2024

Aligning the smiles of dating dyads causally increases attraction ❤️

We have a new article out in PNAS! We asked participants to take part in a speed-dating experiment while we aligned (😊 vs 😊) or misaligned (😊 vs 😕) their smiles in real time with our face transformation algorithms. While participants remained unaware of the manipulations, aligned smiles enhanced their romantic attraction compared to misaligned scenarios. We thus causally manipulated the emergence of romantic attraction in free social interactions. This demonstrates the potential of our experimental platform DuckSoup, supports alignment theories, and raises important ethical questions about transformation filters! A titanesque effort that we are delighted to publish in PNAS! Check this Twitter thread or the manuscript for more information.

September, 2024

Mozza is now open-source! 👨🏾‍💻 😕→😊

We are releasing our GStreamer plugin Mozza as open source. Mozza enables users to parametrically transform smiles in a video feed, either in real time or offline. The open-source code is here. It implements the smile transformation of Arias et al. (2018), published in IEEE Transactions on Affective Computing.
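To give a flavour of how a plugin like this slots into a standard GStreamer pipeline, here is a minimal sketch using PyGObject. The element name mozza and its alpha smile-intensity property are illustrative assumptions; check the repository for the plugin's actual interface.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Webcam -> (assumed) Mozza smile transform -> on-screen preview.
pipeline = Gst.parse_launch(
    "autovideosrc ! videoconvert ! "
    "mozza alpha=1.5 ! "  # assumed smile-intensity property: >1 amplifies
    "videoconvert ! autovideosink"
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until an error occurs or the stream ends.
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)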

September, 2023

DuckSoup is in public beta! 🥳🥳

We are releasing a public beta of our new experimental platform DuckSoup 🗓. DuckSoup is an open-source videoconference platform enabling researchers to manipulate participants' facial and vocal attributes in real time during social interactions. If you are interested in collecting large, synchronised and multicultural human social interaction datasets, get in touch! Check out a project description here 🧞‍♂️ and the open-source code here 🧑🏽‍💻.

April, 2023

Pupil dilation reflects the dynamic integration of audiovisual emotional speech

New article out in Scientific Reports! 😍 We investigated whether pupillary reactions 👀 can index the processes underlying the audiovisual integration of emotional signals (😊😱😮). We used our audiovisual smile algorithms to create congruent/incongruent audiovisual smiles and studied pupillary reactions to the manipulated stimuli. We show that pupil dilation can reflect emotional information mismatch in audiovisual speech. We hope to replicate these findings in neurodivergent populations to probe their emotional processing. Check the full article here, or this Twitter thread explaining the findings.

September, 2022

Production Strategies of Vocal Attitudes

New article out at Interspeech! 🗣️ We analysed a large multi-speaker dataset of vocal utterances and, using deep alignment methods, characterised the acoustic strategies speakers use to communicate social attitudes. We produced high-level representations of speakers' articulation (e.g. Vowel Space Density) and speech rhythm. We hope these measures can provide an objective way to validate deep voice-conversion methods. Check the full article here.
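As a rough illustration of one such measure: Vowel Space Density is commonly operationalised as a kernel density estimate over a speaker's F1/F2 formant measurements. The sketch below shows that generic recipe in Python; it is not necessarily the exact computation used in the paper.

import numpy as np
from scipy.stats import gaussian_kde

def vowel_space_density(f1_hz, f2_hz, grid_points=100):
    """Return a density map over the F1/F2 plane and its grid axes."""
    samples = np.vstack([f1_hz, f2_hz])  # shape (2, n_frames)
    kde = gaussian_kde(samples)
    f1_axis = np.linspace(f1_hz.min(), f1_hz.max(), grid_points)
    f2_axis = np.linspace(f2_hz.min(), f2_hz.max(), grid_points)
    g1, g2 = np.meshgrid(f1_axis, f2_axis)
    density = kde(np.vstack([g1.ravel(), g2.ravel()])).reshape(g1.shape)
    return density, f1_axis, f2_axis

# Example with simulated formant tracks (replace with measured F1/F2 values).
rng = np.random.default_rng(0)
f1 = rng.normal(500, 120, 2000)   # F1 in Hz
f2 = rng.normal(1500, 350, 2000)  # F2 in Hz
density, _, _ = vowel_space_density(f1, f2)
# Hyper-articulated speech spreads density over a wider F1/F2 area, so the
# area above a density threshold can serve as a compact articulation index.
print(density.shape)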

December, 2021

Facial mimicry in the congenitally blind

We have a new article out in Current Biology! We show that congenitally blind individuals facially imitate smiles heard in speech, despite having never seen a facial expression. This demonstrates that the development of facial mimicry does not depend on visual learning, and that imitation is not a mere visuo-motor process but a flexible mechanism deployed across sensory inputs. Check the full article here, or this Twitter thread explaining the findings.

January, 2021

Beyond correlation: acoustic transformation methods for the experimental study of emotional voice and speech

We have a new article out in Emotion Review! In this article we present the methodological advantages of using stimulus manipulation techniques for the experimental study of emotions. We give several examples of how such computational models can uncover cognitive mechanisms, and argue that stimulus manipulation techniques allow researchers to make causal inferences between stimulus features and participants' behavioral, physiological and neural responses.

April, 2018

Auditory smiles trigger unconscious facial imitation

We have a new article out in Current Biology 🥳!! In this article we modeled the auditory consequences of smiles in speech and showed that such auditory smiles can trigger facial imitation in listeners, even in the absence of visual cues. Interestingly, these reactions occur even when participants do not explicitly detect the smiles.

January, 2018

Uncovering mental representations of smiled speech using reverse correlation

New article out in JASA-EL! We uncovered the spectral cues underlying the perceptual processing of smiles in speech using reverse correlation. The analyses revealed that listeners rely on robust spectral representations that specifically encode vowel formants. These findings demonstrate the causal role played by formants in the perception of smiles, and present a novel method to estimate the spectral bases of high-level (e.g. emotional, social, paralinguistic) speech representations.
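For readers new to the technique, here is a minimal Python sketch of the generic first-order reverse-correlation logic (an illustration of the method, not the authors' exact pipeline): perturb the stimulus spectrum randomly on each trial, then contrast the average perturbations behind "smiling" versus "non-smiling" judgements to estimate the listener's spectral template.

import numpy as np

rng = np.random.default_rng(42)
n_trials, n_bands = 5000, 25  # trials x spectral bands

# Random gain perturbations (dB) applied to each spectral band per trial.
perturbations = rng.normal(0.0, 3.0, size=(n_trials, n_bands))

# Simulated listener: responds "smiling" when a hidden template (here, a
# boost in mid-high bands, where formants shift under smiling) correlates
# with the trial's perturbation. Real responses would come from the task.
hidden_template = np.zeros(n_bands)
hidden_template[12:18] = 1.0
responses = perturbations @ hidden_template + rng.normal(0, 2, n_trials) > 0

# First-order reverse-correlation kernel: "smiling" minus "non-smiling".
kernel = (perturbations[responses].mean(axis=0)
          - perturbations[~responses].mean(axis=0))
print(np.round(kernel, 2))  # recovers peaks around the mid-high bands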