Research improves motion-tracking technology
Have you ever played your friend in Wii Sports and blamed a missed serve or a bad shot on the lag of the video game? A team of researchers has developed Lumitrack, a motion-tracking technology that significantly decreases the lag common in many of today's motion-tracking systems while offering greater precision at a lower cost. The team includes Robert Xiao, a Ph.D. student in Carnegie Mellon's Human-Computer Interaction Institute (HCII); Chris Harrison, a recent Ph.D. graduate from the HCII who will be joining the Carnegie Mellon faculty next year; Scott Hudson, a professor in the HCII; and Ivan Poupyrev and Karl Willis of Disney Research Pittsburgh.
According to Xiao, the project began when he and his research group discovered that information could be encoded in a mathematical pattern, known as an m-sequence, in which every small portion is unique across the entire pattern. “We got the notion that this could be used for tracking somehow,” Xiao said. “You could display this pattern in some way and then identify where you are on it.”
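The unique-window property Xiao describes can be sketched in a few lines of Python. This is a toy illustration with a hypothetical 5-bit code, not Lumitrack's actual parameters: a maximal-length linear-feedback shift register (LFSR) generates a 31-bit sequence in which every 5-bit window appears exactly once.

```python
def m_sequence(n=5):
    """One period (2**n - 1 bits) of a maximal-length LFSR sequence.

    Feedback taps at bits 5 and 3 correspond to the primitive
    polynomial x^5 + x^3 + 1 (an illustrative choice, not Lumitrack's).
    """
    state, bits = 1, []
    for _ in range(2**n - 1):
        bits.append(state & 1)                    # emit lowest bit
        fb = ((state >> 4) ^ (state >> 2)) & 1    # XOR of the tap bits
        state = ((state << 1) & (2**n - 1)) | fb  # shift left, feed back
    return bits

seq = m_sequence()
cyclic = seq + seq[:4]                            # wrap for cyclic windows
windows = [tuple(cyclic[i:i + 5]) for i in range(len(seq))]
assert len(set(windows)) == len(seq)              # every 5-bit window is unique
```

Because every window is unique, seeing any five consecutive bits of the pattern pins down exactly where in the 31-bit sequence the observer is; this is the key to recovering position from a tiny sensor reading.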
Lumitrack consists of two main components: projectors and sensors. The system projects a unique m-sequence, essentially a large barcode, across its field of view. A linear optical sensor somewhere in that field picks up a few pixels of the barcode pattern and decodes them into a position within the pattern. For motion tracking in three dimensions, an additional projector-and-sensor pair is added to the system.
“So basically we set up an image from the projector that is this barcode, in two directions — one barcode in one direction and a different barcode in a different direction — and the sensors are sitting in this field with x and y sensors that can just pick up that barcode,” Xiao said. The system is also extremely precise: “We can track the position of the sensor down to 1.3 millimeters, at the worst,” Xiao said.
The design of this system lets the sensors determine their locations in a very short amount of time because so little data must be processed. Existing motion-tracking technologies, such as Nintendo’s Wii Remote or Microsoft’s Kinect, use full cameras as their sensors, which results in substantial extra processing time.
“Lumitrack, by comparison, is quite simple. Because of the use of the barcode, we can just look up the position in the barcode. It’s basically just a simple table look-up,” Xiao said.
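The “simple table look-up” Xiao mentions can be sketched as follows. The eight-bit pattern and three-bit window here are toy stand-ins for Lumitrack's much longer projected code; the names are hypothetical.

```python
# Toy pattern in which every 3-bit cyclic window is unique,
# standing in for Lumitrack's much longer projected barcode.
PATTERN = "00010111"
W = 3

# Precompute window -> position once, at startup.
cyclic = PATTERN + PATTERN[:W - 1]
TABLE = {cyclic[i:i + W]: i for i in range(len(PATTERN))}

def decode(window: str) -> int:
    """Map the bits a sensor reads to its position in the pattern: O(1)."""
    return TABLE[window]

print(decode("101"))  # "101" occurs only at position 3, so this prints 3
```

Because the table is built once and each reading is a single dictionary lookup, the per-frame cost of decoding stays constant no matter how long the pattern is.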
Because Lumitrack uses one-dimensional sensors, it has relatively little data to process.
“We process the data using a very fast, very efficient algorithm, and then ship it off. The simplicity of the system ends up being its greatest strength,” Xiao said. This simplicity also results in a lower cost overall, since Lumitrack requires only a few sensors and projectors.
Other applications of this system include gesture control and computer-generated imagery (CGI) for films. In the case of gesture control, Lumitrack has the potential to perform even better than other current technologies, such as the Leap Motion Controller.
CGI motion capture usually requires large and very expensive systems. With Lumitrack, similar results could be achieved with far less equipment and at a lower cost, making the technology accessible to ordinary people.
Xiao and his team recently attended the 2013 Association for Computing Machinery Symposium on User Interface Software and Technology in Scotland. “I got an opportunity to present my research there to the wider community and get them excited about the possibilities that we could have,” Xiao said. While Lumitrack is still a research prototype at the moment, commercial vendors interested in creating a product have reached out to the Carnegie Mellon group. Xiao predicts that Lumitrack could be commercialized within three years, transforming motion-tracking systems into truly real-time experiences.