As you, our readers, know by now, the SIGGRAPH 2021 conference featured an incredible series of Talks. One such Talk was “Swish: Neural Network Cloth Simulation on ‘Madden NFL 21’” from a team of engineers at Electronic Arts (EA). For the session, Dr. Christopher “Chris” Lewin, James Cobb, and James Power walked through Swish, a real-time, machine learning-based cloth simulation technique used to generate realistic cloth deformation and wrinkles for NFL player jerseys in EA’s “Madden NFL 21”. Here, we interview the engineers to learn how they developed this first-of-its-kind tool to be featured in a shipped game.
SIGGRAPH: Share some background on your Talk. What inspired the development of Swish for the most recent “Madden” game?
James Cobb: With the arrival of the next generation of consoles (PS5, XBSX), we were exploring new opportunities to improve the visual fidelity of “Madden”. A key theme in many discussions with creative leadership centered around objects in motion. “Madden” has historically had some impressive-looking still frames, but seeing the characters in motion left something to be desired. We spent time studying slow-motion captures of real-world football players, cross-referencing with our game, and it was clear that our characters on the field lacked a certain dimension of visual realism, particularly in our jerseys. Football jerseys exhibit a lot of complex movement as they interact with the various underlying pad combinations and a wide variety of body silhouettes for different NFL positions. (For instance, defensive linemen are sized quite differently than quarterbacks or kickers.) This is exacerbated by the fact that football players can end up in very extreme poses. Because we had relied on traditional smooth skinning, our jersey animations felt “painted on”, lacking the variety in visuals and movement of the real-world material.
We needed high-resolution deformation in the jersey to showcase all the different folds, creases, and wrinkles, but we wanted them to animate in a believable way as our characters transitioned between poses. We explored adding a real-time cloth simulation to our jerseys, but our estimates at the time showed that we wouldn’t be able to achieve the desired fidelity within performance budgets for 22 characters in view. We needed to explore a new approach.
SIGGRAPH: Tell us about the process of developing Swish, especially as it relates to neural simulation.
Chris Lewin (CL): Our biggest technical challenge when developing Swish was dealing with normal maps. These are texture images that we use to suggest extra geometric detail in the cloth’s appearance by modifying the shading of the material. Normal maps are a ubiquitous feature of modern games that allow objects to look very detailed without having to render huge numbers of triangles. However, they are actually quite painful to work with. Normal maps are typically generated by shooting rays between a low- and high-resolution mesh, but there are many ways this can go wrong, producing results that are very objectionable to the player. One common failure mode is that reflexive folds in the cloth (wrinkles where the cloth folds back over itself) can introduce noise in the resulting normal map. This kind of fold happens all the time in cloth simulations, particularly when the character bends forward, so we had to carefully limit the inputs of our method to prevent these situations from arising.
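For readers unfamiliar with the baking process Lewin describes, here is a minimal Python sketch of ray-based normal transfer: for a point on the low-resolution mesh, a ray is cast along its normal, and the normal of the nearest high-resolution triangle it hits is recorded into the map. The function names, distances, and toy geometry are illustrative assumptions, not EA’s pipeline.

```python
# Toy sketch of low-to-high-res normal map baking via ray casting.
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore ray/triangle intersection. Returns distance t or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:                       # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None

def bake_texel(low_pos, low_normal, high_tris, high_normals, max_dist=0.1):
    """For one texel: shoot a ray along the low-res normal (both ways),
    keep the nearest high-res hit, and return that triangle's normal.
    Reflexive folds can make the *wrong* surface the nearest hit, which
    is the noise problem described in the interview."""
    best_t, best_n = None, low_normal
    for tri, n in zip(high_tris, high_normals):
        for sign in (+1.0, -1.0):            # search outward and inward
            t = ray_triangle(low_pos, sign * low_normal, *tri)
            if t is not None and t <= max_dist and (best_t is None or t < best_t):
                best_t, best_n = t, n
    return best_n / np.linalg.norm(best_n)

# Toy example: one low-res point beneath a single high-res "wrinkle" triangle.
tri = [np.array([-1.0, 0.05, -1.0]), np.array([0.0, 0.05, 1.0]),
       np.array([1.0, 0.05, -1.0])]
n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
print(bake_texel(np.zeros(3), np.array([0.0, 1.0, 0.0]), [tri], [n]))
```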
Another problem with normal maps is generating them with a neural network. Our technique generates both mesh deformations and normal maps for each character, and the normal map part was substantially more difficult to get right. This may seem like a strange statement given that generating images has been one of the most successful uses of neural networks in recent years, but the reality for us is that generating images directly with a convolutional neural network is far too slow for our real-time use case. To deal with this, we took inspiration from 2D sprite-based animations and modified our neural network to output a very simple code of around 10 real numbers. That code is then used to look up a texture in a nearest-neighbor database of pre-generated images. This makes the neural network evaluation very cheap in terms of processing power, even cheaper than a heavily optimized, real-time cloth simulation.
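To make the idea concrete, here is a minimal sketch of the “small code plus nearest-neighbor texture” scheme described above: a tiny network maps pose features to a roughly 10-number code, and that code selects a pre-baked normal map from a database, so no image is ever generated by the network at runtime. The layer sizes, feature dimensions, and random stand-in weights are illustrative assumptions, not the shipped Swish model.

```python
# Toy sketch: low-dimensional code -> nearest-neighbor normal map lookup.
import numpy as np

rng = np.random.default_rng(0)

CODE_DIM = 10          # "a very simple code of around 10 real numbers"
POSE_DIM = 64          # assumed size of per-character pose features
DB_SIZE  = 256         # assumed number of pre-generated normal maps

# Stand-ins for trained weights (two-layer MLP) and the baked database.
W1, b1 = rng.normal(size=(128, POSE_DIM)), np.zeros(128)
W2, b2 = rng.normal(size=(CODE_DIM, 128)), np.zeros(CODE_DIM)
db_codes = rng.normal(size=(DB_SIZE, CODE_DIM))        # one code per texture
db_textures = rng.random(size=(DB_SIZE, 256, 256, 3))  # pre-baked normal maps

def predict_code(pose):
    """Cheap MLP evaluation: pose features -> low-dimensional code."""
    h = np.maximum(0.0, W1 @ pose + b1)    # ReLU hidden layer
    return W2 @ h + b2

def lookup_normal_map(code):
    """Nearest neighbor in code space selects a pre-baked normal map."""
    i = np.argmin(np.linalg.norm(db_codes - code, axis=1))
    return db_textures[i]

pose = rng.normal(size=POSE_DIM)           # e.g. joint rotations this frame
tex = lookup_normal_map(predict_code(pose))
print(tex.shape)                            # (256, 256, 3)
```

The appeal of this design is that the expensive part, image generation, moves entirely offline: at runtime the cost is one small matrix multiply plus a database search.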
In my opinion, the best part of Swish is the subtle realism we can lend to the clothing in the game. Counting the pixels in our animated normal maps, our simulations are perhaps 10 times more detailed than other in-game cloth simulations. Football jerseys are a relatively simple test case for this technology because they are tight to the body, but we hope to be able to show more interesting and varied garments simulated using Swish in the future.
SIGGRAPH: How has this solution transformed game design at EA? What problems did it solve?
James Power: Many games at EA are chasing similar goals when it comes to believable cloth simulation. Whether it’s pushing for authenticity in sports titles, attempting to capture both the power and subtlety of human kinetics, or striving for cinematic realism in narrative-driven games, cloth simulation has always been a difficult problem space for art direction, design, and engineering to balance. You can strive for extreme quality with high-resolution clothing meshes and highly detailed simulations and still end up with a solution that takes too much of your frame time, or flip to the other end of the spectrum and get something that just doesn’t look very realistic. Swish allows teams to optimize both sides of that problem by offloading the simulation into the neural network, producing very high-quality deformation relative to the quality of the input meshes. For titles with lots of characters, like sports, this is a tremendously powerful tool, and I can easily see it being extended to crowds or other titles within the company.
The next step for the tool is to reduce the barrier to entry for developing content with it, by building a streamlined pipeline and distributing the simulation workload to reduce iteration times. We aim to allow other teams within the company to easily author content and experience the impact it can make in their titles first-hand, without the oversight of the original developers.
Machine-learning solutions are already making a big impact in the games industry, and I feel that Swish is an excellent application of this technology. Swish solves a traditionally expensive problem without requiring dramatic changes to the input content (no re-authoring of our source meshes was needed), which allows teams to experiment and improve their games without disrupting established development processes. Technologies that unlock the potential of artists and other developers without causing that kind of disruption have great value, and, as the technology matures, I could see more applications of similar techniques in other areas of game development. That’s exciting.
SIGGRAPH: You presented an on-demand Talk and participated in a Q&A about real-time rendering during SIGGRAPH 2021. What was it like to present virtually? What was your favorite part of participating in the conference?
CL: Presenting at SIGGRAPH has been an ambition of mine since my student days, so doing this Talk was, in some sense, satisfying a long-term goal. It wasn’t quite like I had imagined years ago — presenting a short summary through a Zoom call is quite different to speaking on a stage — but it was still interesting and empowering to have other people engaging with our work. The best part of in-person SIGGRAPH is, of course, meeting and hanging out with one’s peers, and having discussions on Discord with the same people has a different and sometimes much weirder energy. It’s something I will definitely remember in future years, although I hope it’s not a format we have to go back to again!
SIGGRAPH: What advice do you have for someone looking to share their own innovations at a future SIGGRAPH conference?
CL: When you are working on a weird project like ours that blends different disciplines (physics, rendering, animation, ML), it is easy to get intimidated by the level of polish and rigor of the content presented at SIGGRAPH. But, actually, the stranger your work is, the more interesting it will be to people, particularly if it comes with intrinsic evidence of value, such as having shipped as part of a game or film. I am often much more interested in a Talk that shows a clever way to sidestep a problem than in a Technical Paper that uses complex math to solve that problem directly. So, my advice to people wondering whether they should submit their work is: just do it. It doesn’t take long, and your work is probably better than you think.
Interested in sharing your research or other projects at SIGGRAPH 2022, 8–10 August? Review open programs and related requirements here.
Dr. Christopher “Chris” Lewin is a senior physics software engineer at Electronic Arts (EA) SEED. His interests include real-time multiphysics simulation, applications of machine learning to physics, and the crossover between physics and rendering. Before working as a researcher at SEED, he was an engine developer for EA’s Frostbite engine, where he mainly worked on cloth simulation.
James Cobb is a senior rendering engineer at Apple. He currently works on the Platform Architecture team conducting graphics research for exploratory GPU features. His interests are wide-ranging in the field of graphics and game development, but the focus of his career has been primarily on character rendering and performance optimization of real-time 3D applications. Before joining Apple, James worked at Electronic Arts (EA), where he was the graphics lead on the sports title “Madden”. When James is not working on new tech or reading about the latest advancements in rendering, he enjoys spending time with his wife and two children in the Sunshine State of Florida.
James Power is a senior rendering engineer at Electronic Arts (EA) and currently works for BioWare. Previously, he was focused on character and hair rendering at EA Tiburon for “Madden NFL”. He joined EA in 2018 and works remotely from the (sometimes) sunny town of Lunenburg, Nova Scotia.