‘Interactive Style Transfer to Live Video Streams’ Creates New Possibilities for Artists

5 November 2020 | Conferences, Real-Time

The painting is courtesy of Zuzana Studena, used with permission.

After an inspiring SIGGRAPH 2020 virtual experience and an exciting Real-Time Live!, we caught up with Ondřej Texler who presented during the show. Texler is one of the creators behind “Interactive Style Transfer to Live Video Streams,” the SIGGRAPH 2020 Real-Time Live! project that received Best in Show in a two-way tie, and the Technical Paper “Interactive Video Stylization Using Few-shot Patch-based Training.” Here, Texler shares his team’s reaction to being awarded Best in Show, how the demo was developed, and how this project helps artists create their work. Also, Texler touches on the most exciting aspect of his team’s Technical Papers research and encourages others to submit their work to the program.

SIGGRAPH: Congratulations on receiving Best in Show in a two-way tie at SIGGRAPH 2020’s Real-Time Live!. What was your reaction to receiving that honor?

Ondřej Texler (OT): Thank you! We were, of course, delighted and surprised. Being featured in the SIGGRAPH Real-Time Live! session had always been our dream. We know how difficult it is to be accepted to the show, so even being able to participate was an accomplishment in itself. You can imagine how we felt when the committee announced that we had also won the Best in Show award!

SIGGRAPH: Tell us about your experience presenting from Prague during the Real-Time Live! livestream.

OT: I enjoy giving presentations at conferences, and I was looking forward to experiencing my “six minutes of fame” on stage in front of an audience. But I enjoyed the virtual event as well; everyone was doing everything they could to make things go as smoothly as possible, and the whole virtual experience was pleasant and undoubtedly unique.

SIGGRAPH: Let’s get technical. How did you develop “Interactive Style Transfer to Live Video Streams”? What inspired the project? How many people were involved? How long did it take?

OT: At our Department of Computer Graphics and Interaction at CTU in Prague, we focus not only on interesting research problems but also on developing practically usable tools. Style transfer to video is an interesting problem from a research point of view, and there is also strong demand from artists to see it solved, so their interest was an essential inspiration for us.

It is difficult to say precisely how long this project took to develop because our eight-person team built it upon our previous research experience and knowledge. Inventing the method, building the prototype, and getting the paper accepted took us roughly one year.

SIGGRAPH: In contrast to previous style-transfer techniques, this approach does not require lengthy pre-training processes or large training data sets. Why was that an important aspect of this project’s development?

OT: Imagine you would like to stylize a video for which it would be challenging to collect a sufficiently varied training dataset, such as the interior of a medieval castle. You do not want to paint every frame by hand, so you provide one, or a few, stylized keyframes. Then, you would like to see your style transferred to the rest of the sequence in a semantically meaningful way; for example, bricks should be depicted using the same red brushstrokes as in the exemplar. You would also like to see an arbitrary frame in the sequence stylized quickly, without waiting for the entire video to be processed first. All of these practical requirements were highly challenging for previous methods, and so we tried to address them in our framework.
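As a rough illustration of the few-shot patch-based training idea behind this answer, the PyTorch-style sketch below trains an image-to-image network on aligned random patches cropped from a keyframe and its painted exemplar. It is a minimal sketch only: `net` stands in for any convolutional image-to-image model, and the single L1 loss is a simplification of the paper's full training objective.

```python
import torch
import torch.nn.functional as F

def sample_patches(content, style, patch_size=32, n=64):
    """Crop aligned random patch pairs from the keyframe and its painted exemplar.
    content, style: (1, C, H, W) tensors of the same spatial size."""
    _, _, h, w = content.shape
    ys = torch.randint(0, h - patch_size + 1, (n,)).tolist()
    xs = torch.randint(0, w - patch_size + 1, (n,)).tolist()
    c = torch.stack([content[0, :, y:y + patch_size, x:x + patch_size] for y, x in zip(ys, xs)])
    s = torch.stack([style[0, :, y:y + patch_size, x:x + patch_size] for y, x in zip(ys, xs)])
    return c, s

def train_step(net, opt, keyframe, exemplar):
    """One optimization step: pull the network's output on keyframe patches
    toward the corresponding patches of the painting."""
    c, s = sample_patches(keyframe, exemplar)
    opt.zero_grad()
    loss = F.l1_loss(net(c), s)  # L1 here stands in for the paper's full objective
    loss.backward()
    opt.step()
    return loss.item()
```

Because the network only ever sees small patches of a single keyframe, training needs neither a large dataset nor a lengthy pre-training phase, which is exactly the practical requirement described above.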

SIGGRAPH: How does the Interactive Style Transfer network adapt to style changes within seconds?

OT: Our training strategy takes into account the way artists create their artworks. Painting is usually a slow, incremental process, which means the network can follow the artist and improve gradually. A great advantage of our method is that it allows us to parallelize painting with training and to amortize the time that would otherwise be spent training the network on an already finished style exemplar.
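A hedged sketch of that amortization loop, reusing `train_step` from the sketch above: `latest_exemplar` is a hypothetical callback that returns the artist's current, possibly unfinished canvas as a tensor, or `None` once the painting session ends.

```python
def train_while_painting(net, opt, keyframe, latest_exemplar, steps_per_refresh=50):
    """Train in parallel with painting: each refresh picks up the newest
    brushstrokes, so the network stays only a few steps behind the artwork."""
    while True:
        exemplar = latest_exemplar()   # snapshot of the partially finished painting
        if exemplar is None:           # artist has finished; stop training
            break
        for _ in range(steps_per_refresh):
            train_step(net, opt, keyframe, exemplar)
```

Because training never waits for the painting to be finished, the stylized preview can react to new brushstrokes within seconds, as described above.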

SIGGRAPH: How do you envision this interactive platform being used in the future? What problems does it solve?

OT: We believe our approach opens up the possibility of creating stylized movies and live interactive sessions or installations in which artists can make a live version of their artworks that adapts to changes in the captured environment. Our approach is applicable not only in artistic scenarios but also in photorealistic settings, such as live virtual makeup or the creation of roto masks for live performances.

SIGGRAPH: At SIGGRAPH 2020, your team also presented the Technical Paper “Interactive Video Stylization Using Few-shot Patch-based Training.” What is most exciting about this research?

OT: The most exciting part was that we invented something we initially thought would not be feasible. Once we managed to make it work properly, we immediately started to see our framework's numerous practical use cases. The moment of achieving something that opens new possibilities is always fascinating.

SIGGRAPH: What advice would you share with others considering submitting a Technical Paper to SIGGRAPH?

OT: Follow your passion, make your dreams a reality, and submit your work to SIGGRAPH.

SIGGRAPH 2021 Technical Papers submissions open soon. Visit the SIGGRAPH 2021 website for the latest information.


Meet the Team

Ondřej Texler is a PhD candidate and researcher at CTU in Prague, supervised by Prof. Daniel Sýkora. He received his BSc and MSc degrees in computer science from the same university. His primary research interests lie in computer graphics, image processing, computer vision, and deep learning, and he specializes in generating realistic-looking images according to given conditions or examples. In recent years, Texler has worked with Adobe Research and Snap Inc. on several research projects. Currently, he collaborates with the NEON.life team at Samsung Research America.

Daniel Sýkora is a professor at the Department of Computer Graphics and Interaction at CTU in Prague, where he leads the development of algorithms for artists. Sýkora collaborates with Google, Snap, Adobe, and Disney. He has received scientific awards including the Günter Enderle Best Paper Award and the Neuron Award for Promising Young Scientists.
