How Smart Is Your Smartphone?

31 October 2019 | Conferences, Mobile, Research

Jim Hagarty © 2019 ACM SIGGRAPH

At a time when we rely on our smartphones to do nearly everything, it can feel like the technology has been pushed as far as it can go. Then new versions are released and, once again, our minds are blown. What’s next? Can our smartphones actually get “smarter”? The following roundup of SIGGRAPH 2019 Technical Papers dives into just how much more the technology is capable of and what new developments we might see in the future for our beloved devices.

Distortion-Free Wide-Angle Portraits on Camera Phones

Distorted wide-angle portraits may soon be a thing of the past thanks to “Distortion-Free Wide-Angle Portraits on Camera Phones,” a paper presented at SIGGRAPH 2019 by YiChang Shih (Google Inc.), Wei-Sheng Lai (Google Inc., University of California, Merced), and Chia-Kai Liang (Google Inc., University of California, Merced). The research delivers high-quality results across a wide range of fields of view, from 70° to 120°. Unlike today’s camera phones, which stretch and skew faces near the edges of wide-angle shots, the proposed method uses a new algorithm that corrects the distortion without the need for professional editing.

According to their paper, by computing a subject mask to assign per-vertex weights on a coarse mesh over the input image, the authors are able to formulate energy terms that encourage facial vertices to locally follow the stereographic projection (a conformal mapping between a sphere and a plane, which preserves local shapes), correcting the distortion on faces while leaving the rest of the image in its original perspective projection.
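To give a rough sense of what that correction looks like, here is a minimal Python sketch of the idea (a hypothetical illustration, not the authors’ implementation, which solves a full mesh optimization with smoothness and boundary terms): image coordinates are mapped to their stereographic counterparts, and the two projections are blended with a per-pixel face weight.

```python
import numpy as np

def stereographic(x, y, f):
    # Map perspective image coordinates (x, y), taken at focal length f,
    # onto the stereographic projection of the same viewing ray. The
    # projection is conformal, so local shapes such as faces keep their
    # proportions instead of stretching toward the image edges.
    r = np.maximum(np.hypot(x, y), 1e-8)      # distance from the optical axis
    theta = np.arctan2(r, f)                  # viewing-ray angle
    r_new = 2.0 * f * np.tan(theta / 2.0)     # stereographic radius
    return x * (r_new / r), y * (r_new / r)

def correct_faces(x, y, f, face_weight):
    # Hypothetical point-wise blend: pixels with a high face_weight follow
    # the stereographic target, while the background keeps the original
    # perspective projection so straight lines stay straight.
    xs, ys = stereographic(x, y, f)
    return ((1 - face_weight) * x + face_weight * xs,
            (1 - face_weight) * y + face_weight * ys)
```

For a 100° horizontal field of view, for example, f would be roughly half the image width divided by tan(50°).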

Using this algorithm, the researchers found that the new energy function performs effectively and reliably even when a photo contains a large group of subjects, and it could be used in the future to correct the fisheye-like stretching that users often get when trying to capture a wide-angle shot with today’s camera phones.

Single Image Portrait Relighting

Why apply a filter yourself when your phone can apply one for you? Proving that your smartphone can, and should, a research team from Google and the University of California, San Diego developed “Single Image Portrait Relighting,” which they presented at SIGGRAPH 2019. The paper describes a learning-based portrait-relighting system that can change the lighting of a portrait taken in an unconstrained environment to any provided environmental lighting.

The system trains a neural network that takes a single RGB portrait, captured with a standard cellphone camera in an unconstrained environment, and produces a relit image of that subject as though it were illuminated according to any provided environment map.
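As a rough illustration of that interface (a sketch that assumes an encoder-decoder style network; the encoder and decoder callables below are stand-ins, not the authors’ code), the portrait is encoded once, the lighting it was captured under can be read off, and the decoder is conditioned on a different environment map to produce the relit result.

```python
def relight(portrait_rgb, target_env_map, encoder, decoder):
    # Hypothetical sketch of single-image relighting: `encoder` compresses
    # the portrait into features plus an estimate of the lighting it was
    # shot under; `decoder` re-renders the subject under a *different*
    # environment map. Both callables stand in for the trained network,
    # which is not reproduced here.
    features, estimated_source_light = encoder(portrait_rgb)
    relit_rgb = decoder(features, target_env_map)
    return relit_rgb, estimated_source_light
```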

The team notes that because the technology can produce a 640×640 image in only 160 milliseconds, it may enable interactive, user-facing photographic applications in the future. Applications for this technology are not limited to professional use; it stands to improve the everyday user’s experience as well.

Vidgets: Modular Mechanical Widgets for Mobile Devices

Smartphones are used for more than just a quick call or text, and they should be augmented to accommodate the full range of applications that users demand. Researchers from Columbia University and Snap Inc. offer just that with their technical paper “Vidgets: Modular Mechanical Widgets for Mobile Devices,” which they presented in Los Angeles in July. “Vidgets” is a family of mechanical widgets, including push buttons and rotary knobs, that augment mobile devices with tangible user interfaces.

With consumer needs in mind (maximum screen size and minimum thickness), the team created a series of widgets that snap in and out of the Vidget case. The technology measures the small shift of the device when a user presses into a Vidget, using the phone’s built-in inertial sensors to detect the interplay between the button’s nonlinear resistance force and the finger’s muscle force.
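A toy version of that sensing idea, assuming access to the phone’s accelerometer stream (the paper’s actual per-widget force model is more involved), could flag a press as a brief spike relative to the device’s smoothed motion:

```python
import numpy as np

def detect_presses(accel, threshold=0.5, window=10):
    # Hypothetical detector: pressing a passive widget nudges the whole
    # phone, which shows up as a short spike in the built-in accelerometer.
    # Subtract a moving-average baseline and flag samples whose residual
    # exceeds a threshold (values here are illustrative, in m/s^2).
    accel = np.asarray(accel, dtype=float)
    baseline = np.convolve(accel, np.ones(window) / window, mode="same")
    return np.flatnonzero(np.abs(accel - baseline) > threshold)
```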

“Vidgets” have a wide variety of applications, such as zooming in and out of photos with only one hand, a better gaming experience, playing different notes on a virtual instrument, and more!

Submissions for SIGGRAPH 2020 Technical Papers are now open! Submit now through 22 January 2020.
