Image credit: Fusion: Full Body Surrogacy for Collaborative Communication © 2018 Keio University Graduate School of Design, The University of Tokyo
The Emerging Technologies displayed at SIGGRAPH 2018 will push the boundaries of human experience, the sense of self, and the data systems that make all of this possible. And Fusion: Full Body Surrogacy for Collaborative Communication is no exception. We sat down with contributor MHD Yamen Saraiji to discuss his work on this piece and to reflect on his work at SIGGRAPH 2017, MetaLimbs.
SIGGRAPH: Your SIGGRAPH 2017 project, MetaLimbs, was dubbed by more than one publication as a real-life Dr. Octopus (“Spider-Man” franchise) surrogate [The Verge, Nerdist]. It then went on to win the Best in Show Award from the Emerging Technologies program. How would you describe the effect winning this award had on propelling this new project, Fusion: Full Body Surrogacy for Collaborative Communication, forward?
MHD Yamen Saraiji (YS): Last year’s project, MetaLimbs, was one of our exploratory works of changing our bodies and re-engineering our limbs for augmentation, which we debuted to the public at SIGGRAPH 2017. Indeed, the event brought vast attention through the media, helping us spread our message of body engineering to a broader audience. This year, we started exploring a new direction: how to augment our bodies remotely to enhance telecollaborative applications. Fusion is also debuting at SIGGRAPH 2018 in Vancouver, within the Emerging Technologies program. We hope it will highlight the role of collective telecollaboration in enhancing communication remotely between multiple people.
SIGGRAPH: What inspired you to develop a full body surrogacy program? How did that inspiration affect the final product?
YS: A common issue we observed when using telecommunication tools and telepresence systems, in general, is the disjointed point of view; this results in a lack of mutual understanding when collaborating on the same task. There has been some interesting prior work, such as Jun Rekimoto’s Jack-In Head, which addressed this problem by sharing the vision of two people. Here, we decided to take a step further and share the body of a surrogate person. Thus, a remote person can literally dive into the surrogate’s body and perform actions side-by-side with him [or her]. The final system is compact and designed as a wearable backpack that delivers stereo vision via a three-axis robot head, along with two anthropomorphic robot arms and hands. By wearing it, the surrogate’s body is shared with another person. We describe this as “social morphology,” in which many are morphed into one. Worth mentioning, [as part of the] SIGGRAPH 2016 Emerging Technologies [program], we demonstrated the opposite morphology of one-to-many through Layered Telepresence.
SIGGRAPH: Can you explain how telecollaboration is an integral component for Fusion to function? Why did you decide to make this technology a two-person experience?
YS: Through our research, we have been exploring the future of body and action sharing, in which multiple individuals can collectively work together on the same task, or even transfer knowledge and physical skills through the shared body. For Fusion, we focused on the complementary experience that can be achieved through a two-person system: operator and surrogate. By sharing the same point of view, the operator can clearly understand the situation the surrogate is facing, and thus provide the corresponding actions or advice for the surrogate to follow. We realized that the system is not limited to such scenarios; it can also guide the surrogate’s body posture by directing his [or her] arms, so the surrogate can learn how to operate or navigate in an action-driven manner.
SIGGRAPH: What future real-world applications do you see resulting from your work on Fusion?
YS: Beyond the proposed applications, we are interested in exploring the use of such wearable robotic technologies in different scenarios, such as rehabilitation and support. For example, a physician could access a physically impaired patient’s body to support his [or her] muscle movement and correct his [or her] motion, while gaining a deeper understanding of the difficulties the patient is experiencing. For industrial and profession-related applications, this technology could be used to train workers in factories or in the field, where physical skills can be transferred and taught by an expert, or to support them remotely from the shared point of view.
SIGGRAPH: What advice do you have for a first-time contributor attending SIGGRAPH 2018 this year?
YS: The conference will be a great experience on multiple levels. You will gain an immense amount of feedback from experts in your field and other fields. Prepare flyers and posters, as many attendees will be interested in learning more about your exhibited work or university. You also might be asked to prepare a short presentation, so make sure your slides are ready!
If you are exhibiting your work, make sure to triple-test your demo and equipment prior to the conference, as it will be an intensive week of demonstrations. Prepare backup hardware for any potential technical faults.
Last but not least, Vancouver is a fantastic city — make sure to rent a bike and cycle around Stanley Park!
[Registration for SIGGRAPH 2018 is open!]
MHD Yamen Saraiji received his M.Sc. and Ph.D. degrees in Media Design from Keio University, Japan, in 2015 and 2018, respectively. His research, under the theme “Radical Bodies,” expands on the topic of machines as an extension of our bodies, and emphasizes the role of technologies and robotics in reshaping our innate abilities and cognitive capacities. His work, which is experience-driven, has been demonstrated and recognized with awards at conferences such as SIGGRAPH, Augmented Human, and CHI.