Unveiling the Illusion: Self-other Identity With Facial Morphing

18 January 2024 | Conferences, Emerging Technologies

Image Credit: Ginza Sony Park Project

In this deep dive into “A Demonstration of Morphing Identity: Exploring Self-other Identity Continuum Through Interpersonal Facial Morphing,” we explore the minds of Shunichi Kasahara, Jun Nishida, Kye Shimizu, and Santa Naruse, whose SIGGRAPH 2023 Emerging Technologies contribution blurs the boundary between self and other like a funhouse mirror. This project may have you questioning the essence of who we are. Learn more about this awe-inspiring new technology that explores identity and paves the way for groundbreaking advancements in interactive technologies.

SIGGRAPH: Tell us about the process of developing “A Demonstration of Morphing Identity: Exploring Self-other Identity Continuum Through Interpersonal Facial Morphing.” What inspired you to pursue the project?

Shunichi Kasahara (SK), Jun Nishida (JN), Kye Shimizu (KS), and Santa Naruse (SN): This project was inspired by our experiences during the COVID-19 pandemic. With the rise of remote work and virtual conferences, we identified each other through our faces and our voices, which we could easily manipulate with computers. As face filters and effects make it easier than ever to manipulate our facial images, we began to wonder how this would affect us and our diverse social relationships. The findings we gathered by iterating over multiple exhibitions were presented at CHI 2023 in “Morphing Identity: Exploring Self-Other Identity Continuum through Interpersonal Facial Morphing Experience.”

SIGGRAPH: What was the biggest challenge you faced in its development?

SK, JN, KS, and SN: The biggest challenge we faced was making the experience enjoyable while tackling the issues that arise when a system must run in real time. Today, many AI apps and tools offer ways to manipulate voices, faces, and other features through analysis and rendering pipelines, but not in real time. By adopting open-source models and software, we were able to create a real-time experience that lets participants take part without having to wait.
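For readers curious about what a bare-bones real-time setup can look like, the sketch below is a simplified illustration only, not the authors’ pipeline: it cross-dissolves two live camera feeds with an adjustable morph ratio using OpenCV. The library choice, the camera device indices, and the plain alpha blend are all assumptions made for illustration; a system like the one described above would presumably align the two faces (for example, via landmarks or learned face models) before blending.

```python
# A minimal sketch (not the authors' pipeline): cross-dissolving two live
# camera feeds with an adjustable morph ratio, using OpenCV.
import cv2

WIDTH, HEIGHT = 640, 480
WINDOW = "morph"

cap_a = cv2.VideoCapture(0)   # participant A's camera (device index assumed)
cap_b = cv2.VideoCapture(1)   # participant B's camera (device index assumed)

cv2.namedWindow(WINDOW)
# Trackbar value 0..100 maps to a morph ratio of 0.0 (all A) .. 1.0 (all B).
cv2.createTrackbar("ratio", WINDOW, 50, 100, lambda v: None)

while True:
    ok_a, frame_a = cap_a.read()
    ok_b, frame_b = cap_b.read()
    if not (ok_a and ok_b):
        break

    # Bring both feeds to a common resolution before blending.
    frame_a = cv2.resize(frame_a, (WIDTH, HEIGHT))
    frame_b = cv2.resize(frame_b, (WIDTH, HEIGHT))

    ratio = cv2.getTrackbarPos("ratio", WINDOW) / 100.0
    # Pixel-wise blend: output = (1 - ratio) * A + ratio * B
    blended = cv2.addWeighted(frame_a, 1.0 - ratio, frame_b, ratio, 0.0)

    cv2.imshow(WINDOW, blended)
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
        break

cap_a.release()
cap_b.release()
cv2.destroyAllWindows()
```

Keeping every step a cheap per-frame operation, rather than routing frames through an offline analysis-and-rendering pipeline, is what makes an interactive, no-waiting experience of the kind the authors describe feasible.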

SIGGRAPH: What did you discover when investigating the multifaceted interpersonal experience that “A Demonstration of Morphing Identity” creates?

SK, JN, KS, and SN: We discovered a continuum in how our sense of identity and agency changes throughout the morphing experience. For example, some participants did not notice that any change was happening until the faces had completely swapped. Others were entertained by the idea that someone else could be controlling their face. This opened up dialogue between the two participants about their family connections, heritage, and gender.

SIGGRAPH: How does “A Demonstration of Morphing Identity” lend itself to research on interactive devices for remote communication? How can it benefit remote communication?  

SK, JN, KS, and SN: Facial recognition and imagery are a cornerstone of remote communication, allowing us to grasp vital information and extend communication through our expressions and emotions. By modifying how we present ourselves to others, the research opens a wider question of how these technologies can act as a feedback mechanism for how we recognize ourselves and others. Though this research focused only on our faces, much work remains on how we can extend it to the other modalities we inherently have.

SIGGRAPH: How does the project address cultural diversity, inclusivity, and interpersonal communication in its exploration of self-other identity?

SK, JN, KS, and SN: With the advent of synthetic media and creation technologies, we can create virtual identities of ourselves that depict our imaginations. This project not only allows participants to engage in dialogue about gender, race, and identity as their faces morph together, but also prompts conversations about which versions of ourselves we would want to become. As we increasingly depict different versions of ourselves in real life, on social media, and in virtual communities, we hope the project sparks engagement with how extending our modalities can benefit our wellbeing.

SIGGRAPH: What’s next for “A Demonstration of Morphing Identity”?

SK, JN, KS, and SN: The technologies behind the project are advancing faster than ever, allowing us to explore other modalities, create higher-quality depictions, and build more meaningful engagements.


Shunichi Kasahara, Ph.D., currently serves as a project leader and researcher at Sony Computer Science Laboratories, Inc. (Sony CSL), a position he has held since 2014. Since 2023, he has also been a visiting researcher at the Okinawa Institute of Science and Technology Graduate University (OIST), where he runs the Cybernetic Humanity Studio, a collaboration between Sony CSL and OIST. He received his Ph.D. in interdisciplinary information studies from the University of Tokyo in 2017.

His professional career commenced at Sony Corporation in 2008, followed by a role as an affiliate researcher at MIT Media Lab in 2012. At present, Kasahara is leading research on “Cybernetic Humanity,” exploring the new humanity that is emerging from the integration of humans and computers. His significant contributions to the field have been presented at computer science conferences such as ACM CHI, UIST, SAP, and SIGGRAPH, and he has published in various scientific journals. His work also extends to the development of interactive exhibitions and the implementation of social solutions.

Jun Nishida, Ph.D., currently serves as an assistant professor in the Department of Computer Science at the University of Maryland, College Park. Previously, he was a postdoctoral fellow at the Human Computer Integration Lab (Prof. Pedro Lopes) at the University of Chicago. Jun holds a Ph.D. in human informatics from the University of Tsukuba.

He is interested in exploring interaction techniques where people can communicate their embodied experiences to support each other in the fields of rehabilitation, education, and design. To this end, Jun hopes to design wearable interfaces that share one’s embodied experiences across people by means of electrical muscle stimulation, exoskeletons, and virtual/augmented reality systems, along with psychological knowledge. Jun has been recognized with more than 40 awards including the ACM UIST Best Paper Award, Microsoft Research Asia Fellowship Award, and Forbes 30 Under 30.

Kye Shimizu is an interface designer and research engineer who works with different mediums to explore existing boundaries across different fields. His focus is exploring how computational technologies, alongside mediums of expression, can help us understand how we perceive ourselves in areas of communication, expression, and identity. He is a strong believer in doing things “in the wild,” observing and exploring research that cannot be carried out in traditional experimental settings. Excited to present in a variety of settings, he has exhibited his work around the globe at institutions such as HeK Basel (Switzerland), Ars Electronica (Austria), and Miraikan (Japan). He is also a strong communicator of his work, publishing and presenting at academic conferences such as CHI and SIGGRAPH, as well as at design venues such as Design Indaba and Paris Fashion Week.

Santa Naruse is a visual artist born in 2000. He has been working on audio-visual and installation projects while exploring visual expression that combines real-time rendering and machine learning technologies.
