About

Hi! I'm Pablo Arias (Sarah), a French/Colombian postdoctoral researcher working at Lund University (Cognitive Science lab) in Sweden and at IRCAM (Perception and Sound Design team) in France. Lately, I've been trying to hack human social interaction mechanisms using real-time voice/face transformations. To do this, we've been developing a new videoconference experimental platform called DuckSoup, which will allow researchers to manipulate participants' voices and faces in real time during social interactions.

I hold a PhD in cognitive science from Sorbonne University, a Master of Engineering in digital technologies and multimedia from Polytech' Nantes, and a Master of Science in acoustics, signal processing and computer science applied to sound from IRCAM. You can find a complete list of my publications here, or follow me on Twitter to keep up to date with my latest work.

Oh, and by the way, this site is under construction, so you might see nonsense text here and there. Cheers!

News

Upcoming in 2022

We will release our new experimental platform DuckSoup in 2022 🗓. DuckSoup is an open-source videoconference platform that will allow researchers to manipulate participants' facial and vocal attributes, such as smiles or intonations, in real time during social interactions. If you are interested in collecting large, synchronised & multicultural human social interaction datasets, get in touch!

June, 2021

We won the Sorbonne University "Emergence" call with our REVOLT (Revealing human bias with real-time vocal deep fakes) proposal! 💥 This project is in collaboration with Nicolas Obin (SU). We'll be hiring a signal processing post-doc in 2022 to develop real-time deep learning voice transformation algorithms. Get in touch if you are interested!

Sept, 2019

I'm moving to a new postdoctoral position at Lund University Cognitive Science in Sweden to work with Petter Johansson and Lars Hall in the Choice Blindness lab! We aim to create unprecedented methodological tools and experimental paradigms to study human social interaction mechanisms (more to follow).

Dec, 2018

I defended my PhD thesis, entitled "The cognition of auditory smiles: a computational approach", which was evaluated by an inspiring interdisciplinary jury composed of biologist Tecumseh Fitch (Univ. Vienna), computational psychologist Rachael Jack (Univ. Glasgow), computer scientist Catherine Pelachaud (SU), neurologist Martine Gavaret (Paris Descartes), neuroscientists Julie Grezes (ENS) and Pascal Belin (Univ. Aix Marseille), psychoacoustician Patrick Susini (IRCAM), and mentor and friend Jean-Julien Aucouturier (CNRS).

Highlights

December, 2021

Facial mimicry in the congenitally blind

We have a new article out in Current Biology! We show that congenitally blind individuals facially imitate smiles heard in speech despite having never seen a facial expression. This demonstrates that the development of facial mimicry does not depend on visual learning and that imitation is not a mere visuo-motor process but a flexible mechanism deployed across sensory inputs. Check out the full article here, or this Twitter thread explaining the findings.

January, 2021

Beyond correlation: acoustic transformation methods for the experimental study of emotional voice and speech

We have a new article out in Emotion Review! In this article, we present the methodological advantages of using stimulus manipulation techniques for the experimental study of emotions. We give several examples of using such computational models to uncover cognitive mechanisms, and argue that stimulus manipulation techniques allow researchers to make causal inferences between stimulus features and participants' behavioral, physiological, and neural responses.

April, 2018

Auditory smiles trigger unconscious facial imitation

We have a new article out in Current Biology 🥳!! In this article, we modeled the auditory consequences of smiles in speech and showed that such auditory smiles can trigger facial imitation in listeners, even in the absence of visual cues. Interestingly, these reactions occur even when participants do not explicitly detect the smiles.

January, 2018

Uncovering mental representations of smiled speech using reverse correlation

New article out in JASA-EL! We uncovered the spectral cues underlying the perceptual processing of smiles in speech using reverse correlation. The analyses revealed that listeners rely on robust spectral representations that specifically encode vowels' formants. These findings demonstrate the causal role played by formants in the perception of smiles and present a novel method to estimate the spectral bases of high-level (e.g., emotional, social, or paralinguistic) speech representations.