Behind the Research: Light Field Imaging with Robert Twomey

This summer, for the first time, the North American SIGGRAPH conference awarded a “Best Art Paper” prize to a paper in the Art Papers program. The winning paper was “Transforming the Commonplace through Machine Perception: Light Field Synthesis and Audio Feature Extraction in the Rover Project” by researchers Robert Twomey and Michael McCrea. Our blog team recently connected with Twomey to hear about his interests in research and art technology, the story behind the SIGGRAPH 2017 award-winning paper, and what the conference means to him. All comments are Robert’s own and do not necessarily reflect the opinion of his collaborator, Michael McCrea.

SIGGRAPH: Where did your interest in research begin? 

Robert Twomey (RT): My interest in research began in college. I was exposed to a variety of kinds of research through my education in the sciences and art: formal scientific-method work, art historical research, and applied studio work. Each of these has its own distinct methods and expected outcomes. Professionally, I have worked in a neuroimaging research lab studying elements of speech and language perception and in an immersive cinema and experimental game lab with my M.F.A. adviser Sheldon Brown, and, most recently, I am pursuing a Ph.D. in digital arts and experimental media.

My projects over the past 6-7 years juxtapose elements of human and machine perception. They involve research into new technologies — such as speech recognition, machine learning, and ubiquitous wireless sensing — to create digital artworks asking questions about our technologies and ourselves. I strive to create a mutually revelatory encounter, where we learn something about each part (machine and human), and am interested in how new technologies succeed in giving insight into our intimate selves as well as where they fail. In using machines as metaphors or surrogates to address human experience, I am joining a long artistic (and technical) tradition.

Good art is a kind of research. A side benefit of doing a lot of technical exploration in order to make our work is that we come up with techniques and applications that are potentially of interest to larger communities beyond fine arts. We can then write them up, as we did with our SIGGRAPH paper this year.

SIGGRAPH: How did you first become involved with both art and technology? Was there a specific experience or mentor that drew you to this field?

RT: I have been interested in each of these, art and technology, for as long as I can remember. Both kinds of activities are things I have pursued throughout my life. I majored in painting and biomedical engineering in college, but for a long time I had no substantive overlap between them in my creative work. At some point, I was introduced to the work of artist/activist/engineer Natalie Jeremijenko and had the opportunity to work with her during my M.F.A. studies. She became a very important mentor and role model for me as I developed my own ways of combining art and technology. While my work has gone in a slightly different direction, I am still inspired by the ways she is able to navigate the discourses of art, science, and political engagement.

SIGGRAPH: Your final paper describes mechatronic, machine perception, and audio-visual synthesis techniques. What led you to focus your research on these particular areas? Do you plan to explore similar themes in future work?

RT: I’m interested in points where new technologies purport to represent, capture, or interface with essential human behavior. This has led me to projects that deal with language as a closed system through speech recognition technologies, computation as a metaphor for human cognition through chatbots and synthetic children’s speech and writing, and observational drawing reduced to computer vision and mechatronic automation to produce new representations of time and space. Our SIGGRAPH project, “Rover,” posits a machine explorer sent to document and interpret domestic space. That interest has expanded through my new project, “A Machine For Living In,” which explores the home as a site of intimate life through various machine perception and wireless sensing technologies. The larger arc of my research is framed through the idea of the machine as metaphor, employing new technologies to study sites of intimate life.

SIGGRAPH: Share a bit about your research process: How long did it take from ideation to development? What was the most challenging hurdle to overcome?

RT: This project represents a significant collaborative effort with my co-author, Michael McCrea. I initially developed my interest in light field imaging during a spectral modeling audio class in my Ph.D. program around 2013. Ren Ng’s (Lytro) Fourier Slice Photography was an interesting visual analog to the techniques we were studying in the audio domain. I explored the existing literature on light field imaging, notably Stanford’s early camera arrays and gantry systems, and began to develop my own light field imaging workflow with the mechatronic, structure-from-motion, and computer vision resources I could create.
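For readers unfamiliar with the technique, the core idea behind rendering photographs from a light field captured on a camera array or gantry can be illustrated with a simple “shift-and-add” synthetic-aperture refocus. The sketch below is purely illustrative, written in Python/NumPy with assumed variable names and a simplified, integer-pixel disparity model; it is not the Rover project’s actual code.

```python
# Minimal sketch of synthetic-aperture refocusing by "shift and add".
# Assumes a light field stored as one grayscale image per (u, v) camera
# position on a regular capture grid -- an illustrative assumption, not
# the Rover project's data format.
import numpy as np

def refocus(light_field, alpha):
    """light_field: array of shape (U, V, H, W), one image per grid position.
    alpha: refocus parameter; its sign/magnitude selects the focal plane."""
    U, V, H, W = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture image in proportion to its offset
            # from the grid center, then accumulate.
            dy = int(round(alpha * (u - uc)))
            dx = int(round(alpha * (v - vc)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Averaging the shifted views brings one depth plane into focus while blurring the rest, which is the basic effect that Fourier Slice Photography computes more efficiently in the frequency domain.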

This project, “Rover,” was my first major collaboration and a very positive experience. Michael and I took techniques that I had developed in the visual domain, techniques that he had developed in the audio domain, and built a project around the aesthetic qualities of it all. There was an incredible amount of labor that went into this project. When I look back at it, I can’t believe we got it done. In the space of six months, we developed mechatronic camera positioning systems with real-time controls, computer vision software, audio classification and clustering software, real-time audio synthesis, and GLSL shaders for working with light field datasets. Also, most importantly, we developed a frame for the work: how our efforts related to traditions of landscape photography, painting, travelogue literature, and machine explorers. It was an incredibly satisfying process that fed the desire to share those techniques and decisions with a savvy art and technology community.
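As a rough illustration of the kind of audio classification and clustering mentioned above, here is a minimal sketch that extracts per-frame MFCC features and groups them with k-means. The choice of MFCCs, librosa, and scikit-learn is an assumption made for demonstration; it is not necessarily what the Rover software uses.

```python
# Minimal sketch: per-frame audio feature extraction plus unsupervised
# clustering. Feature choice (MFCCs) and libraries (librosa, scikit-learn)
# are assumptions for illustration, not the project's actual pipeline.
import librosa
import numpy as np
from sklearn.cluster import KMeans

def cluster_audio_frames(path, n_clusters=8):
    # Load audio at its native sample rate.
    y, sr = librosa.load(path, sr=None)
    # Compute MFCCs: shape (13, n_frames); transpose to one row per frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    frames = mfcc.T
    # Group frames into clusters of similar spectral character.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(frames)
    return labels
```

The resulting frame labels could then drive downstream decisions, such as selecting which sound events to resynthesize or spatialize in real time.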

SIGGRAPH: What made you decide to submit your project to SIGGRAPH’s research program? Any tips for 2018 submitters?

RT: We developed a diverse body of techniques to create our project and wanted to share them with the SIGGRAPH community. I figured that our project had a combination of artistic interest and “nerdy” technical appeal that would be interesting to SIGGRAPH attendees.

For 2018 submitters: One aspect of the whole experience that I really enjoyed was the back and forth with our reviewers in the revision process. It is wonderful to get input from scholarly peers, be able to make concrete changes to our writing, and watch the paper evolve into something even better than where it started. Also, be sure to flesh out your artistic methodology as well as any technical innovations.

SIGGRAPH: The theme for SIGGRAPH 2018 is “generations.” As a part of SIGGRAPH’s generations, what would you share with someone who plans to attend SIGGRAPH for the first time next summer?

RT: I’d encourage visitors to take a second and reflect on the rich history of art and technology in all of its different forms. The first SIGGRAPH conference ran in 1974, but artists have been exploring the creative possibilities of new technologies for centuries. Think of Rembrandt’s mirrors and lenses, or Brunelleschi’s pinhole viewing apparatus in creating linear perspective. It’s a marvel to see how far things have come, and artists are still on the front lines in critically engaging new technical possibilities.


Robert Twomey is an artist exploring the poetic intersection of human and machine perception. Trained as a painter and engineer, he integrates traditional forms with new technologies to examine questions of empathy, desire, and human-computer interaction. Twomey has presented his work at SIGGRAPH, the Museum of Contemporary Art San Diego, and the Seattle Art Museum, and has been featured by Microsoft and Amazon. He received his B.S. from Yale University with majors in Art and Biomedical Engineering, an M.F.A. in Visual Arts from the University of California, San Diego, and is a Ph.D. candidate in Digital Arts and Experimental Media at the University of Washington. He is currently an Assistant Professor of Digital Media at Youngstown State University.

