SciTech

Researchers create more accurate photo-matching method

Credit: Juan Fernandez/Staff

Identifying whether two images are similar may sound like a simple task. A human can match similar images at a glance, but a computer lacks such inherent visual processing capabilities. Faced with this problem, researchers in the Robotics Institute have developed a new algorithm for identifying “uniqueness” that has yielded strikingly accurate results.

The new techniques were developed by professors Alexei Efros and Abhinav Gupta and research associate Abhinav Shrivastava of Carnegie Mellon, as well as post-doctoral researcher Tomasz Malisiewicz of the Massachusetts Institute of Technology. The techniques differ sharply from those of previous image-searching software; instead of finding similarities in broad attributes such as color and shape, the research team focused on finding the “unique” aspects of an image. Put more simply, instead of comparing two images side by side and tallying their similarities, the method compares the image in question against a large number of other images and identifies what sets it apart.

Gupta said that most programs “simply latch onto the language of an image.... We were interested in latching onto the content of the language.” The researchers also realized that this focus on content could span different domains, such as paintings, sketches, and photographs, which previous photo-matching methods had struggled with.

To determine uniqueness within an image, the program compares it to a group of randomly selected images. Instead of focusing on the color of individual pixels, which can be lost in the transition from a sketch to a color photo, the system identifies an image’s unique qualities by finding which pixels or objects of the image rarely appear in the randomly selected images.
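The published details go well beyond this article, but the flavor of the idea can be sketched in a few lines of code. The toy Python below is not the researchers’ actual implementation; the feature vectors and function names are purely illustrative. It weights each feature dimension by how much the query image deviates from a random pool, so that properties common to most images count for little and rare ones dominate the comparison.

```python
import numpy as np

def uniqueness_weights(query_feat, random_feats):
    """Weight each feature dimension by how far the query image
    deviates from a pool of randomly sampled images.

    query_feat   : 1-D feature vector describing the query image
    random_feats : 2-D array, one row per randomly selected image
    """
    mean = random_feats.mean(axis=0)
    std = random_feats.std(axis=0) + 1e-8   # guard against division by zero
    # Dimensions where the query stands out from the random pool get
    # large weights; dimensions shared by most images get small ones.
    return np.abs(query_feat - mean) / std

def weighted_similarity(query_feat, candidate_feat, weights):
    """Score a candidate image, emphasizing the query's unique dimensions."""
    return float(np.dot(weights * query_feat, candidate_feat))
```

Because the weights come from the query’s contrast with many other images rather than from raw pixel colors, a scheme like this can in principle score a sketch against a photograph, so long as both are described by comparable features.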

For example, if one searched for a painting of the Sydney Opera House, the system wouldn’t focus on domain-specific characteristics like color and texture, or on common items like trees. It would instead notice the distinct wave-like shape of the building.

The researchers have already tested numerous applications. One program allows users to take snapshots of their surroundings and retrieves their location from Google Maps: the computer searches for visually similar images, then homes in on the latitude and longitude of the scene to determine the user’s location. The system can also construct what is called a “visual memex,” a data set that allows users to more easily examine the visual similarities between multiple images. The user can search through this graphical data set or even create a movie of the visually similar images.
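As a rough illustration of the geolocation idea (again a hypothetical sketch, not the team’s code, and assuming a database of geotagged reference images), the snippet below ranks references with the weighted similarity from the earlier sketch and averages the coordinates of the best matches:

```python
import numpy as np

def estimate_location(query_feat, weights, db_feats, db_latlons, k=5):
    """Toy geolocation: rank geotagged reference images by weighted
    similarity to the query, then average the top-k coordinates.

    db_feats   : 2-D array of feature vectors for reference images
    db_latlons : 2-D array of (latitude, longitude) rows, aligned
                 row-for-row with db_feats
    """
    scores = db_feats @ (weights * query_feat)   # weighted similarity per image
    top_k = np.argsort(scores)[-k:]              # indices of the k best matches
    return db_latlons[top_k].mean(axis=0)        # crude location estimate
```

A real system would need a large geotagged collection and a more robust way to combine matches, but the structure is the same: match first, then read the location off the matches’ metadata.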

The Carnegie Mellon research is creating a stir in the technological sphere. Shrivastava traveled to Hong Kong to present the findings at SIGGRAPH, a conference on computer graphics and interactive techniques. According to Gupta, response to the project has been very positive, and it has drawn interest from other computer scientists.

One shouldn’t expect to have a program with this algorithm installed on one’s computer soon, however. The program, although remarkably accurate, demands far more processing power and time than current image-searching programs: around 45 minutes for a single search.

Despite this long search time, the researchers are proud of their work.

“We didn’t expect this approach to work as well as it did,” Efros said in a Carnegie Mellon press release. “We don’t know if this is anything like how humans compare images, but it’s the best approximation we’ve been able to achieve.”