Improving animation: Details of facial expressions are key

Robotics Institute graduate student Laura Trutoiu helps animators make facial movements seem more real. (credit: Kate Groschner/Staff)

In the past 10 years, eight computer-animated films have won the Academy Award for Best Animated Feature, according to the Academy Awards Database. Computer animation has become more realistic in the past decade, and some of the work contributing to those advances has been done here at Carnegie Mellon.

“I am interested in the small motions of the face that can add a high degree of realism and improve the quality of animations,” said Laura Trutoiu, a fourth-year Ph.D. student in the Robotics Institute. Her research focuses on computer graphics with an emphasis on facial animation. Her goal is to model the subtle facial movements that bring animated characters to life.

Trutoiu’s research is interdisciplinary, and thus calls for collaboration across multiple fields. Her Ph.D. adviser, Jessica Hodgins, is a professor in the Robotics Institute and the Computer Science Department. Trutoiu also collaborates with Jeffrey Cohn, a professor of psychology at the University of Pittsburgh, and Iain Matthews, a senior research scientist at Disney Research Pittsburgh.

Trutoiu holds a bachelor of arts in computer science from Mount Holyoke College in Massachusetts. She started conducting research in a virtual reality lab at Mount Holyoke. “That’s when I got hooked on research,” she said.

Movement has always been a common theme in Trutoiu’s research. As an undergraduate summer intern at the University of Utah, she researched the illusion of self-motion: how to induce the feeling of movement in a person sitting in a driving simulator.

At Carnegie Mellon, Trutoiu started looking at how Parkinson’s disease patients moved with their deep brain stimulators turned on or off. These neurostimulators are devices implanted in the patients’ brains that control tremors. Trutoiu measured the patients’ movements, such as how much the spine swayed when the patients were walking, to quantify the improvement that the brain stimulator provided.
Later, Trutoiu’s research shifted to focus on the generation of facial animations.

Trutoiu explained that the process of creating a computer-animated movie, such as Toy Story, has a few steps. First, a sketch artist draws snapshots of the main scenes in the movie. The characters are then modeled in a computer using three-dimensional representations. Textures and lighting are applied to the images, adding realism to the surfaces.

Then the animators come in. Trutoiu works with the animators, using her research to complement their work. She uses videos and images of humans to track and measure facial movements. From these measurements, she develops robust algorithms that predict how different parts of the face should move.

“When we see a computer-generated animation, we as humans can pick up immediately if something is wrong,” Trutoiu said, emphasizing the importance of accurately modeling facial expressions.

Currently, Trutoiu studies the movements associated with smiling.

“My categorization is between spontaneous and non-spontaneous smiles,” she said. A spontaneous smile is triggered automatically, while a non-spontaneous, posed smile has a more deliberate, grin-like appearance.

Whether a smile is perceived as genuine depends on how fast the corners of the lips move up, how long the smile is kept at the maximum amplitude, and how fast it is released.
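The three quantities Trutoiu mentions — how fast the smile rises, how long it holds at maximum, and how fast it releases — can be measured directly from a lip-corner trajectory. A minimal sketch, assuming a single rise-hold-release smile and an illustrative apex threshold (neither taken from her work):

```python
# Sketch: split a smile amplitude trajectory into onset, apex, and offset
# phases, assuming a single rise-hold-release shape. The threshold and the
# synthetic trajectory are illustrative, not values from Trutoiu's research.

def smile_phases(amplitude, fps, apex_frac=0.9):
    """Return (onset, apex, offset) durations in seconds.

    `amplitude` is a list of lip-corner displacements over time; frames at
    or above apex_frac * max are counted as the apex (maximum-amplitude hold).
    """
    threshold = apex_frac * max(amplitude)
    above = [i for i, a in enumerate(amplitude) if a >= threshold]
    apex_start, apex_end = above[0], above[-1]
    onset = apex_start / fps                         # rise time
    apex = (apex_end - apex_start + 1) / fps         # hold time
    offset = (len(amplitude) - 1 - apex_end) / fps   # release time
    return onset, apex, offset

# Example: a synthetic one-second smile sampled at 250 frames per second:
# a fast rise, a brief hold, and a slower release.
rise = [i / 50 for i in range(50)]            # 0.2 s rising
hold = [1.0] * 50                             # 0.2 s at maximum
release = [1 - i / 150 for i in range(150)]   # 0.6 s releasing
onset, apex, offset = smile_phases(rise + hold + release, fps=250)
```

In this synthetic example the release takes roughly three times as long as the rise, the kind of asymmetry such measurements are meant to expose.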

Trutoiu uses data obtained from high-resolution videos of moving faces. These videos are captured at 250 frames per second, so they appear in slow motion when played back at the 30 to 60 frames per second of normal video. Appearance models then analyze the face both as a whole and in localized regions, tracking shape and appearance changes and taking detailed measurements over time.

“Over time, the computer builds a shape model and an appearance model,” Trutoiu said. “It uses those to figure out what is the expected position for that corner of the lip in the next frame.”
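One way to picture the shape model she describes is as principal component analysis over tracked landmark coordinates: the model learns the dominant ways the face shape varies, and an observed shape can be projected back onto those learned modes. This is a generic illustration, not Trutoiu's actual pipeline; the landmark layout and "smile mode" below are invented for the example:

```python
import numpy as np

# Sketch of a PCA-style shape model: from tracked landmark positions in past
# frames, learn the main modes of variation, then express a new frame in
# terms of those modes. Illustrative only -- not Trutoiu's actual system.

def fit_shape_model(frames, n_modes=2):
    """frames: (n_frames, n_points*2) array of landmark x,y coordinates."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # Right singular vectors = principal directions of shape variation.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]

def project(shape, mean, modes):
    """Reconstruct a shape using only the learned modes: the model's
    expected configuration closest to the observed one."""
    coeffs = modes @ (shape - mean)
    return mean + modes.T @ coeffs

# Example: synthetic frames of 4 landmarks whose motion is a single
# made-up "smile" mode (lip corners moving out and up).
rng = np.random.default_rng(0)
base = np.array([0., 0., 1., 0., 0., 1., 1., 1.])
smile_dir = np.array([1., -1., -1., -1., 0., 0., 0., 0.])
frames = base + rng.uniform(0, 1, (100, 1)) * smile_dir
mean, modes = fit_shape_model(frames, n_modes=1)
recon = project(frames[0], mean, modes)
```

Because the synthetic data varies along exactly one mode, a single learned mode reconstructs each frame; real faces need more modes, and the model's prediction for the next frame is constrained to lie in the space those modes span.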

Getting the position, velocity, and acceleration of specific points on the face right is what generates the proper facial expressions, producing a more realistic animation.
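Given per-frame positions of a tracked point, the velocity and acceleration mentioned above can be estimated by finite differences. A minimal sketch with an assumed synthetic trajectory at the 250 frames per second capture rate:

```python
# Sketch: estimate velocity and acceleration of a tracked facial point from
# its per-frame positions using central finite differences. The trajectory
# below is synthetic, purely for illustration.

def derivatives(positions, fps):
    """Central-difference velocity and acceleration for interior frames."""
    dt = 1.0 / fps
    vel = [(positions[i + 1] - positions[i - 1]) / (2 * dt)
           for i in range(1, len(positions) - 1)]
    acc = [(positions[i + 1] - 2 * positions[i] + positions[i - 1]) / dt**2
           for i in range(1, len(positions) - 1)]
    return vel, acc

# Example: a point moving a constant 0.01 units per frame at 250 fps,
# i.e. constant velocity and zero acceleration.
pos = [0.01 * i for i in range(5)]
vel, acc = derivatives(pos, fps=250)
```

Central differences are a standard choice here because they are less noisy than one-sided differences, which matters when the derivatives feed into how an animated point should accelerate.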

“The final goal of this research is to be able to tell animators or artists, when you have a sequence of someone smiling, this is how the cheeks should be raised, how the lips should actually stretch,” Trutoiu said.