Disney, CMU pioneer new motion capture software
Filming for The Dark Knight Rises may be capturing everyone’s attention in Pittsburgh, but that was not the only movie magic occurring in the city this summer. Scientists at Disney Research, Pittsburgh (DRP) and Carnegie Mellon have been working together on technology to improve the accuracy of motion depicted on screen and to enhance moviegoers’ viewing experience.
DRP developed a new tactile technology, called Surround Haptics, which makes it possible for video gamers and film viewers to experience a variety of sensations, from the jolt of a collision to the feel of bugs crawling across their skin. Surround Haptics, which will be used to enhance a high-intensity driving simulator game, allows players seated in a chair fitted with inexpensive vibrating actuators to feel everything from road imperfections to car collisions.
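The article does not describe how a sparse set of actuators can render sensations that seem to move smoothly across the skin. One common approach, sketched below under assumed numbers and function names (not Disney’s published algorithm), is to create a “phantom” vibration between two real actuators by splitting the output energy between them:

```python
import math

# Illustrative sketch: a virtual vibration point between two adjacent
# actuators (at positions 0.0 and 1.0) is simulated by dividing the
# signal energy between them, so a sparse grid in a chair can render
# an apparently continuous moving sensation.

def phantom_intensities(position, total_intensity=1.0):
    """Return (left, right) actuator intensities for a virtual vibration
    at `position` in [0.0, 1.0], keeping total energy constant."""
    left = total_intensity * math.sqrt(1.0 - position)
    right = total_intensity * math.sqrt(position)
    return left, right

# Sweep a collision "jolt" across the chair from left to right.
for step in range(5):
    pos = step / 4
    left, right = phantom_intensities(pos)
    print(f"pos={pos:.2f}  left={left:.2f}  right={right:.2f}")
```

At the midpoint the two actuators share the energy equally, and at either end only one actuator fires, which is what lets a handful of cheap vibrating elements cover a whole seat back.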
“Although we have only implemented Surround Haptics with a gaming chair to date, the technology can be easily embedded into clothing, gloves, sports equipment, and mobile computing devices,” Ivan Poupyrev, senior research scientist at DRP, said in a press release.
DRP and Carnegie Mellon’s Robotics Institute have also developed technology that allows computer animators to depict complex facial expressions more realistically. To do so, researchers attached 320 reference markers to a professional actor and recorded facial motion-capture data while he expressed a variety of emotions and actions.
“We can build a model that is driven by data, but can still be controlled in a local manner,” J. Rafael Tena, a Disney research scientist, said in a press release. The researchers analyzed the motion-capture data to divide the face into 13 regions; as a result, computer animators can now manipulate the regions to create the facial poses they desire.
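The key property described above is local control: editing one of the 13 regions should not disturb the rest of the face. A minimal sketch of that idea, with assumed dimensions and random stand-in data rather than the researchers’ learned model, might look like this:

```python
import numpy as np

# Hypothetical region-based face model: each of the 13 facial regions
# has its own basis of deformations (learned from motion-capture data
# in the real system; random placeholders here). An animator adjusts
# weights per region, and only that region's markers move.
rng = np.random.default_rng(0)

NUM_REGIONS = 13        # from the article: the face is divided into 13 regions
POINTS_PER_REGION = 24  # assumed marker count per region
NUM_BASES = 4           # assumed number of basis deformations per region

bases = rng.standard_normal((NUM_REGIONS, NUM_BASES, POINTS_PER_REGION, 3))

def pose_face(weights):
    """Combine per-region basis shapes into 3-D marker displacements.
    `weights` has shape (NUM_REGIONS, NUM_BASES); editing one row of
    weights changes only that region's output."""
    return np.einsum('rb,rbpc->rpc', weights, bases)

weights = np.zeros((NUM_REGIONS, NUM_BASES))
weights[0, 0] = 1.0           # activate one basis shape in region 0 only
face = pose_face(weights)
print(face.shape)             # displacements grouped per region: (13, 24, 3)
```

Because each region has its own weights, the model is “driven by data” (the bases come from the captured performance) yet still controllable locally, which is the property Tena describes.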
Researchers have also improved motion-capture techniques so they are no longer confined to a closed studio. In traditional motion capture, markers are attached to an actor, and an array of fixed cameras in a closed studio records his or her motions. Animators can then use that data to create computer-generated effects or creatures, such as Gollum from the Lord of the Rings films.
DRP and Carnegie Mellon’s new method uses body-mounted cameras to estimate the position of the person in relation to their surroundings. This method allows motion capture to happen outside of a studio in practically any location — even over large distances outdoors.
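One way to see why body-mounted cameras remove the studio constraint: instead of fixed cameras observing the actor, the actor’s own cameras observe the world, and frame-to-frame motion estimates can be chained into a world-space path. The sketch below is an assumed, dead-reckoning-style illustration of that chaining, not the researchers’ actual method:

```python
import math

def integrate_path(relative_steps, x=0.0, y=0.0, heading=0.0):
    """Chain per-frame relative motion estimates into world coordinates.
    Each step is (turn_radians, forward_distance), the kind of relative
    motion a body-mounted camera can estimate between video frames."""
    path = [(x, y)]
    for turn, dist in relative_steps:
        heading += turn
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
        path.append((x, y))
    return path

# Walk one unit forward, turn 90 degrees left, walk one unit forward.
steps = [(0.0, 1.0), (math.pi / 2, 1.0)]
print(integrate_path(steps))  # roughly [(0, 0), (1, 0), (1, 1)]
```

Because each step is relative to the previous one, the path can extend indefinitely over large outdoor distances, with no fixed camera volume to stay inside.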
“This could be the future of motion capture,” Takaaki Shiratori, a post-doctoral associate at DRP, said in a press release. As video cameras become ever smaller and cheaper, “I think anyone will be able to do motion capture in the not-so-distant future,” he said.