Image credit: Amazon Sumerian hosts © 2019 Amazon Web Services
At SIGGRAPH 2018, Leo Chan taught two hands-on workshops in the Studio that walked developers through how to create a virtual host and an immersive scene, respectively, with Amazon Sumerian. We sat down with Leo to learn more about his workshops, Amazon Sumerian, and what has inspired his career in computer graphics. The call for submissions for the SIGGRAPH 2019 Studio is open until next Tuesday, 12 February.
SIGGRAPH: What are some key takeaways that you hope attendees of your workshops left with?
Leo Chan (LC): Our customers wanted to create AR/VR experiences without requiring specialized skills, and to simplify the publishing and distribution of those experiences. In the workshop, attendees with no prior experience built an interactive, multi-platform app incorporating technologies that a few years ago would have required deep engineering investments, such as natural language processing, speech recognition, and responsive 3D avatars. Attendees did this by leveraging cloud services to do the “heavy lifting” on demand, and by publishing their AR/VR creations via a URL using WebGL (and soon WebXR).
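To make the “heavy lifting” point concrete, here is a minimal browser-side sketch, assuming the AWS SDK for JavaScript (v2) and an Amazon Cognito identity pool for guest credentials, that asks Amazon Polly to speak a line of text. It is illustrative only, not Sumerian’s internal code, and the identity pool ID is a placeholder.

```typescript
import * as AWS from 'aws-sdk';

// Region and guest credentials from an Amazon Cognito identity pool
// (the pool ID below is a placeholder; substitute your own).
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000',
});

const polly = new AWS.Polly();

// Ask Polly to synthesize one line of host dialogue as an MP3 stream,
// then wrap the returned bytes in a Blob and play them in the browser.
polly.synthesizeSpeech(
  { OutputFormat: 'mp3', Text: 'Welcome to the scene!', VoiceId: 'Joanna' },
  (err, data) => {
    if (err || !data.AudioStream) {
      console.error('Speech synthesis failed:', err);
      return;
    }
    const blob = new Blob([data.AudioStream as Uint8Array], { type: 'audio/mpeg' });
    new Audio(URL.createObjectURL(blob)).play();
  }
);
```

In Sumerian, hosts integrate with services such as Amazon Polly and Amazon Lex for speech and conversation, so scene authors rarely need to write even this much client code themselves.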
SIGGRAPH: Technically speaking, what advice do you have for developers getting started in Amazon Sumerian?
LC: First and foremost, the best way to get started is to sign up for an Amazon Web Services (AWS) free-tier account and start experimenting with the tool. There’s no software to download or install. All you need is an up-to-date Chrome or Firefox browser. Once you’re up and running, the Amazon Sumerian Documentation website includes tutorials and courses, and is a great place to start learning. We also have a YouTube series, which features Twitch broadcasts covering a broad range of topics. Finally, we have a public Slack channel that is a great resource for asking questions and getting in touch with others in the Amazon Sumerian community.
SIGGRAPH: Now, let’s talk about you. You worked for a number of reputable computer graphics organizations before joining Amazon five years ago to focus on augmented and virtual reality engineering. What spurred the career move?
LC: When I started my career in 3D graphics in 1996 building seminal 3D packages, such as Maya and Houdini, everyone had to be super creative to get around the CPU, memory, and storage limitations of the time. I loved that. It required lots of tricks, compromises, and “cheats.”
I find that creativity often requires limitations in order to thrive.
As the industry and the hardware matured and grew — and the limitations eased off — my creative thirst shifted to the depth of storytelling enabled by animated CG films. I loved working at Pixar Canada and being part of the Pixar creative process. After that, working on CG for films lost a bit of its luster for me. In some ways, I’d fulfilled a lifelong dream and was asking myself, “What’s next?”
I wanted to return to the feeling of those early days of 3D CG, where creativity was born from all of those technical limitations. That’s when I started getting interested in the nascent AR/VR technologies. AR/VR presents many interesting technical challenges. Applications often run on mobile devices with limited resources, and require real-time rendering at high stereoscopic resolutions and refresh rates, often beyond the capabilities of the best GPUs — much less the mobile ones. It’s a creative challenge to figure out how to work within these limitations.
I am also fascinated by the change in the way we interact with computers that AR/VR offers. In many AR/VR applications, you’re unable to type and in VR you can’t see your hands, so new modes of interaction are needed beyond the keyboard, mouse, and hand controller. AR/VR applications often need to be “context aware” of their environments and be able to listen and speak to you in order to be hands-free. I’ve enjoyed working with natural language processing and image recognition technologies to make “smarter” applications that start to feel like they have some intelligence about what is happening around you and can speak with you naturally. I get excited thinking of what people will build with the AR/VR creation tools we’re making. When I was part of the teams building Maya and Houdini, I had a similar feeling about what creations might be possible someday. Twenty years later, it feels great to look back at all of the films and video games that we helped make possible. I can only imagine what lies ahead for AR/VR.
SIGGRAPH: Throughout your career, what continuously inspires you to create and develop?
LC: Berj Bannayan, founder of Soho VFX, and I started out in graphics together as students and good friends at the University of Waterloo. I remember he once said to me that he was attracted to digital artworks that were only possible to create with a computer, such as 3D CGI, as opposed to ones that imitated a non-digital medium, such as a painting program. I felt the same way and this has really stuck with me over my career, whether it be in the creation of animated short films at Pixar Canada, or interactive concert visuals with Greg Hermanovic at Derivative Inc., or now the AR/VR environments we’re enabling with Amazon Sumerian.
SIGGRAPH: Share a favorite SIGGRAPH memory.
LC: Without a doubt, my favorite SIGGRAPH memory is the Interactive Dance Club from SIGGRAPH 1998. It was an amazing moment: so many graphics professionals volunteered their time and talents to take on huge technical challenges, pushing the technological envelope of the time to create a shared, interactive, and fully immersive experience for conference attendees.
Leo Chan is a senior software engineer at Amazon with 15 years of technical team management experience in small, medium, and large organizations. He is an industry-proven expert in 3D/2D computer graphics software, pipelines, and techniques, and has researched and developed novel algorithms in computer graphics and animation, including inverse kinematics, character deformation, fur and hair, simulated facial aging, volumetric rendering, real-time performance animation, and facial recognition from images.