Interview with Dr. Evan Smyth, DreamWorks Animation

By Chris O'Neal -- When the idea that feature-length animated films could be created using computers first emerged, a common reaction was that nothing could replace the richness of images produced by dedicated artists toiling over their drawing boards, and that the days of animated epics were over. In fact, computer-generated films have created a resurgence in animated features: two of the 10 highest-grossing films in history are animated features released within the past three years, and computer-generated imagery has also contributed to many of the biggest live-action hits of the past decade.

Dr. Evan Smyth, the principal software engineer for DreamWorks Animation, producer of Shrek 2, the most successful animated feature ever, is an invited speaker at the 2006 International Supercomputer Conference, to be held June 27-30 in Dresden, Germany. As part of the opening session on New and Innovative Applications, Smyth will give a talk on “Feature Film Animation — New Challenges for HPC.” Here is the abstract for his talk:

“DreamWorks Animation has long had a strong commitment to technical innovation to empower their creative filmmakers. A key area of this innovation is in the deployment of large-scale compute clusters used to render the more than 100,000 frames in a feature film. A typical feature length animated film can consume more than 10 million CPU hours of rendering to complete. And with several films in production at DreamWorks Animation at any given time, you can begin to appreciate their enormous computational requirements. This presentation will describe the HPC challenges, solutions and future directions as they relate to the creation of feature length CG films such as Shark Tale, Madagascar and Shrek.”

At DreamWorks Animation, Smyth is actively involved in developing a supercomputing strategy as well as exploring various co-processing technologies. Since joining the company in 2002, he has worked on Shark Tale, Madagascar and Over the Hedge.
Prior to coming to DreamWorks, Smyth was the software architect at Sony Pictures Imageworks, where he worked on many films, including Stuart Little, Harry Potter and the Sorcerer’s Stone, Spider-Man, I Spy and Hollow Man. Before transitioning to the film industry, he worked at ElectricImage, Alias|Wavefront, ComputerVision and BMW. Smyth holds a Ph.D. from the Massachusetts Institute of Technology, as well as a master’s degree from the Harvard University Graduate School of Design and both a B.Sc. and a B.Arch. from the University of Notre Dame.

This Q&A first appeared in Primeur, the European newsletter serving the Grid community. Primeur interviewed Smyth by telephone from his office in the heart of Southern California’s entertainment industry.

Q: To start with, can you give us a behind-the-scenes look at how HPC is used in the film industry?

Smyth: Sure. There is quite a bit that live-action visual effects and fully computer-generated film-making have in common. Both exploit the “embarrassingly parallel” nature of film itself. That is, there are 24 frames, or images, that need to be created for each second of action on the screen, and fundamentally, each of those frames can be computed, or rendered, at any time relative to the others. Thus, in theory (i.e. with an infinite number of machines and an infinitely capable infrastructure), you could render all the frames in a film in the time it takes to render the slowest frame. Of course, we try not to keep that many frames "in flight" at a time, and nobody has infinite resources, so a great deal of work goes into making each frame render as quickly as possible, as you would expect. We, along with others in the industry, have been exploiting this type of parallelism for quite a while. At DreamWorks Animation, we currently run on clusters of HP dual-core AMD systems — called our “renderfarm” — which allow us to render many, many frames in parallel.
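The frame-level parallelism Smyth describes can be sketched in a few lines. This is a minimal illustration, not DreamWorks code: `render_frame` is a hypothetical stand-in for an expensive renderer call, and a thread pool stands in for a cluster of machines.

```python
from concurrent.futures import ThreadPoolExecutor

FPS = 24  # film rate: 24 frames per second of screen time

def render_frame(frame_number):
    # Hypothetical stand-in for an expensive renderer invocation.
    # Each frame depends only on the scene data, not on other
    # frames, so frames can be computed in any order.
    return f"frame_{frame_number:06d}.exr"

def render_shot(first_frame, seconds, workers=8):
    """Render `seconds` of film starting at `first_frame`, farming the
    independent frames out in parallel. A real renderfarm distributes
    the same work across cluster nodes rather than local workers."""
    frames = range(first_frame, first_frame + seconds * FPS)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_frame, frames))

print(len(render_shot(0, 2)))  # 2 seconds of film -> prints 48
```

With unlimited workers, the wall-clock time for a shot would approach the cost of its slowest frame, which is the "in theory" limit Smyth mentions.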
Also, for those of us in animation, the goal is to reduce far more dramatically the time taken to render a specific frame. A great deal of artist time is spent waiting for the turnaround of a representative frame that the artist has chosen to iterate on in order to find the lighting settings that will make the entire shot look best. To the extent that artists must wait on our computer systems to turn that work around, they are less than optimally efficient — and we have a very efficient renderer running on state-of-the-art hardware. People have been interested in this problem for a long time, and we have developed techniques to mitigate it. Recently, there has been renewed interest in parallelism, and people are exploring taking advantage of SIMD hardware (GPUs) to speed up shading. This is promising, and some interesting work has been shown in the area. Still, those speedups are one to two orders of magnitude and always trade off quality. DreamWorks is pursuing between three and five orders of magnitude of improvement in rendering performance with no sacrifice in quality. That would make what currently takes a minute run in real time, and what currently takes an hour run at interactive speeds. We are currently focused on using more aggressive HPC techniques of distributed-memory parallelism to reach those goals.

Q: Let’s cut to the chase — what’s your favorite animated film?

Smyth: I would have to say Shrek. But my “old school” favorite is Castle of Cagliostro by Hayao Miyazaki.

Q: In the days of “old school” animation, the cels were drawn mostly by hand. Today, films are created at the speed of HPC. Is the art of animation getting lost in the rush?

Smyth: I hope not, and I don’t think so. At DreamWorks, we have two core departments that produce the films once they enter production. There are a number of other departments as well, but Animation and Lighting are at the heart of things.
Animation creates the movement that defines much of the geometry in scenes, and Lighting adds the light and color that bring the otherwise “gray” surfaces to life. The idea of letting the artists ply their craft is still vital — we, as the developers of technology, don’t want to rush them. The role of HPC in creating the art is to allow the artist to set a parameter and see the result without having to wonder how long it will take. Their job, to a great extent, is exploring all the possible solutions; the better they are at this, the quicker they can converge on a good one. They set a variable, process, and get a result back. It’s clearly critical to artistic effectiveness to get results back as quickly as possible. If artists change the lighting in a scene and can see the results immediately, they can either make further changes or move on to the next scene. As I mentioned, rather than rushing the process, we are using HPC to remove the wait for results. The artists still have as much time to work on their craft, but they spend less time waiting.

Q: It seems that the leading animation studios put out one film a year. What goes into creating that film?

Smyth: We actually produce two a year. We first did this in 2004 with Shrek 2 and Shark Tale; this year it’s Over the Hedge and Flushed Away. No other studio does this, and we always have more than two films in production. It’s in the second half of production that the demand for HPC becomes large. The first half of the film-making process is spent writing the script, developing characters and tightening the story. Once we go into production, the manpower and computing demands go way up. It’s this part that I can affect most. One of our recent films required 12 million CPU-hours, which is a pretty big demand, especially when you consider we have more than two films in production at the same time.
This vast amount of computer time was needed to iterate toward the best lighting settings that ultimately went into the final images that appeared on the screen. One part of my job is to see how we can make the rendering software faster and more efficient.

Q: Can you tell us a little more about your HPC systems?

Smyth: We have two distinct clusters, one in Glendale in Southern California and one in Redwood City in Northern California. The clusters are the result of a collaboration with HP and AMD, and the systems are built around a dual-processor, multi-core architecture. The two clusters are linked by a fast connection, so we can render remotely from one site to the other. It’s pretty handy.

Q: So, as the principal software engineer for DreamWorks, what’s in your portfolio?

Smyth: As a software engineer, I developed specific capabilities for requested features, such as fur effects or surface deformations. Then I began to supervise software development, and now I look a little further ahead. I focus not just on what we do and how we could do it 10 or 20 percent faster, but also on things that are more radical, such as real-time or interactive rendering. For example, what if the artist could drag a slider to change the intensity of a light in a frame and instantly see how that would look in the final-quality render? That’s not at all possible right now, but we’re working on it. That would not be an incremental change, such as a speedup; it would enable completely different workflows, rather than the discrete phases we now have. It’s hard to know what it would mean for our business until we actually do it — what is clear is that that level of performance would allow us to organize our workflows much more efficiently. My role is to lay the foundation to get us to that level. It’s exciting, because it seems we are at a time when the hardware is sufficiently powerful that we can actually contemplate doing this.

Q: So, how did you get into the movie business?
Smyth: I started out in architecture — buildings — and mathematics. I became increasingly interested in computer-aided design and free-form surface modeling, which led me into industrial design, and I began to develop software for that area. External factors then led me to move from Boston to Los Angeles, where I found interesting opportunities in the software industry supporting the film industry. It just happened. One difference I found is that quality means something different in each industry.

Q: That’s an interesting observation. Can you elaborate?

Smyth: Well, with automotive surfaces, for example, you’re concerned about where surfaces join. There needs to be some level of mathematical continuity; if there’s a disconnect, you’ll see a “kink” in the reflection lines of the surface. In animation, you may care about kinks — or not. If the kink is in a surface covered with fur, you may not need to care at all. But the surfaces do generally need to be connected — you can’t have movement causing the forearm to disconnect from the elbow. It doesn’t necessarily have to be “right” mathematically, but it has to look plausible. It’s much more subjective. Conversely, industrial design surfaces typically do not deform at all, whereas virtually all of our animated surfaces do, which makes the connectivity problem quite a bit more complex, in a different sense. Consider Prince Charming riding his horse in Shrek 2. There are a lot of techniques for simulating hair motion, but most are intended to be used at walking speed, and trying to adapt them to a higher speed caused a lot of them to fall apart. So we took some liberties, which we can do as long as the result looks good. This isn’t the case in airflow simulations in aircraft design, where you can’t use tricks. Our work is not about physical accuracy so much as it is about getting the look we want. For example, a director may think a splash is too big.
Our plausible-fluids model allows the artist to control how the scene plays out, say by reducing the splash.

Q: That almost sounds like a contradiction. I bet most people would say that computer-generated films have a much more realistic look than old-style animation.

Smyth: Realism is secondary — well-grounded techniques and the physical approach are fundamental. Most people don’t have an in-depth sense of what a splash would really look like in detail. If we get the general details right, that gives the artist control: more splash or less splash. If we do a pretty good job of simulating something accurately, we save time for the artist while still allowing for control. That’s the key.

Q: Can you give us an idea of what you’ll cover in your talk at ISC in Dresden?

Smyth: Sure. After each film, we make a “making of” segment describing the technology behind the film. I’ll show the segment for our newest film, Over the Hedge, which opened May 19 in the U.S. Then I’ll present a short 10-slide overview of animation, telling how we make a film. After that, I’ll move on to what we’ve done in the HPC arena, the results we’ve seen and how HPC affects what we do. I’ll conclude by looking at how we are moving forward in some of the ways I’ve mentioned.

Q: One last question. How has your work affected how you view a movie?

Smyth: Surprisingly little, actually. I like to know how things are done in an academic sense, such as how the mud was simulated or the hair was done for Prince Charming in Shrek 2. But for me, good technology in a film is technology that doesn’t take you out of the film. I prefer to watch it as a film, not as a showcase of technology. I love the process of making films and solving the challenges, but the ultimate purpose is to make a great film, one that tells a story people will like. When I go to the theater, I’m just trying to enjoy the film.
Supercomputing Online wishes to thank Jon Bashor (LBNL Computing Sciences Communications Manager) and Ad Emmen (editor of Primeur) for their help with this story.