Researchers develop depth-sensing camera prototype
The popularity of depth-sensing cameras has skyrocketed recently, driven largely by video games, but the technology has many potential applications beyond entertainment, making it ideal for further study. Researchers at Carnegie Mellon University and the University of Toronto teamed up to create new 3-D imaging technology that works in bright light, a condition under which many other depth-sensing cameras and 3-D sensors fail.
The research team included Srinivasa Narasimhan, an associate professor of robotics at Carnegie Mellon University; William Whittaker, a professor of robotics at Carnegie Mellon University; and Kyros Kutulakos, a professor of computer science at the University of Toronto. It also included Supreeth Achar, a doctoral candidate in robotics at Carnegie Mellon, and Matthew O’Toole, a doctoral candidate in computer science at Toronto.
Depth cameras capture a scene by projecting a pattern of dots and lines over it. By analyzing how the pattern deforms and how long the projected light takes to reflect back, the camera can calculate the scene’s 3-D contours.
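The pattern-deformation idea comes down to simple geometry. The sketch below is a generic structured-light triangulation example, not the team’s actual method; the function name and the calibration numbers are hypothetical, and it assumes a rectified camera–projector pair.

```python
def triangulate_depth(cam_x, proj_x, baseline, focal_length):
    """Recover depth from how far a projected dot has shifted between
    the column the projector emitted it at (proj_x) and the column
    where the camera observes it (cam_x).

    baseline     -- camera-to-projector distance, in metres
    focal_length -- focal length, in pixels
    """
    disparity = cam_x - proj_x  # the "deformation" of the pattern, in pixels
    return focal_length * baseline / disparity

# A dot emitted at projector column 100 and observed at camera column 120,
# with a 10 cm baseline and a 600-pixel focal length, lies 3 m away:
print(triangulate_depth(120.0, 100.0, 0.10, 600.0))  # 3.0
```

The farther the surface, the smaller the shift, which is why nearby objects deform the pattern most.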
In bright light, however, these patterns are usually washed out, because depth cameras rely on low-powered, compact projectors. To address this problem, the research team created a mathematical model that eliminates extraneous light by programming the camera to work more efficiently with its light source.
“We have a way of choosing the light rays we want to capture, and only those rays,” Narasimhan said in a university press release. “We don’t need new image-processing algorithms and we don’t need extra processing to eliminate the noise, because we don’t collect the noise. This is all done by the sensor.”
Using the model, the team built a prototype that synchronizes a laser projector with a rolling-shutter camera, allowing the camera to detect only the points illuminated by the laser. “Even though we’re not sending a huge amount of photons, at short time scales, we’re sending a lot more energy to that spot than the energy sent by the sun,” Kutulakos said in a university press release. “The trick is to be able to record only the light from that spot as it is illuminated, rather than try to pick out the spot from the entire bright scene.” Thanks to this technique, the camera works well under bright light and is also energy efficient.
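Kutulakos’s point about energy can be made concrete with a toy calculation (illustrative numbers only, not taken from the paper): because each camera row is exposed only during the brief window when the laser sweeps across it, ambient light accumulates over a tiny fraction of the frame time, while the laser’s power lands entirely within that window.

```python
def sync_advantage(num_rows):
    """Toy model: how much less ambient light a synchronized
    rolling-shutter exposure collects than a conventional
    full-frame exposure, for a sensor with num_rows rows.
    """
    frame_time = 1.0                  # arbitrary time units
    row_time = frame_time / num_rows  # exposure window per row
    # The illuminated row receives the same laser energy either way,
    # but ambient light integrates over row_time when synchronized
    # versus frame_time when not.
    ambient_synced = row_time
    ambient_global = frame_time
    return ambient_global / ambient_synced

# For a 1000-row sensor, synchronization admits roughly 1000x less
# ambient light per illuminated spot:
print(sync_advantage(1000))
```

The improvement factor grows with the number of rows, which is why the prototype can compete with sunlight despite its modest laser.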
There are many applications for this technology in a wide variety of fields. In the realm of entertainment, the camera could allow video games to be played outdoors and in other brightly lit environments, without interference from glare.
Other applications include medical imaging, where the camera could be used to visualize skin structures, or manufacturing, where it could help people observe shiny materials.
The research could also have major implications for space exploration. The camera could help extraterrestrial robots visualize dark environments such as craters, or operate in the polar regions of the moon, where it could reduce glare.
In a university press release, Whittaker commented on the value of the sensors for use in outer space. “Low-power sensing is very important,” he said. “Every watt matters in a space mission.”
The research was supported by the National Science Foundation, the Natural Sciences and Engineering Research Council of Canada, and the U.S. Army Research Laboratory. The team presented their findings on August 10 in Los Angeles at SIGGRAPH 2015, the International Conference on Computer Graphics and Interactive Techniques.