SciTech

How Things Work: Google Glasses

Young people are notorious for being attached to their electronic devices, and the advent of Project Glass by Google, an initiative in wearable electronics, will only strengthen that attachment. While wearable computing devices are not a new concept, Google’s influence on the technological world means that Project Glass could become one of the most popular manifestations of an electronic wardrobe.

Project Glass aims to create glasses with a built-in camera, screen, and computer that can provide people with information in real time about the world around them. An ABC News article stated that a prototype would be released early this year for software developers, with the actual glasses scheduled for release next year.

According to the Daily Mail, the physical structure of these wearable computing glasses is simple: They will look like ordinary glasses, with transparent LED screens that display information overlaid on whatever the user is looking at. The glasses will be designed so that even users who already wear prescription glasses can use them, either through screens with prescription lenses or through models that fit over existing eyewear.

The glasses will be able to connect to the Internet to find information about whatever the user is looking at, and can be controlled through head tilts and other programmed gestures. Headphones embedded in the frames will relay information to the user, and advertisers could even transmit information to the device based on the user’s proximity to their stores. Google’s new invention will allow people to be connected to their environment at all times.

One of the most interesting features of wearable computing glasses is the ability to look up information by using images as search queries. There are many strategies for doing so. VentureBeat, a technology blog, describes one that involves sending a picture to several specialized search engines, each of which parses a different aspect of it.

Each of these search engines can specialize in a single task: A picture can be run through a text-recognition, landmark-recognition, facial-recognition, or household-object-recognition engine. The results from all of these engines can then be combined and sorted so that only the most relevant are displayed. This way, the user receives information not only about the searched image itself, but also about items related to it. For example, a photo of a menu would return not just the restaurant’s name, but also directions, reviews, and images of the restaurant.
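
To make this fan-out-and-merge idea concrete, here is a minimal Python sketch. The recognizer functions and their outputs are hypothetical stand-ins invented for illustration, not any real service’s API; an actual system would call real text, landmark, face, and object-recognition back ends.

    # A minimal sketch of the fan-out-and-merge strategy described above.
    # The recognizer functions are hypothetical stand-ins; a real system
    # would call specialized services (OCR, landmark, face, object).

    def recognize_text(image):
        """Hypothetical text-recognition engine: (label, confidence) guesses."""
        return [("Joe's Diner - Lunch Menu", 0.92)]

    def recognize_landmark(image):
        """Hypothetical landmark engine: finds nothing in a menu photo."""
        return []

    def recognize_object(image):
        """Hypothetical household-object engine."""
        return [("restaurant menu", 0.85), ("paper", 0.40)]

    def search_by_image(image):
        # Fan the same image out to every specialized engine...
        engines = [recognize_text, recognize_landmark, recognize_object]
        results = [hit for engine in engines for hit in engine(image)]
        # ...then merge and sort so the most relevant results come first.
        results.sort(key=lambda hit: hit[1], reverse=True)
        return results

    if __name__ == "__main__":
        for label, confidence in search_by_image("menu.jpg"):
            print(f"{confidence:.2f}  {label}")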

One disadvantage of this type of searching is that it requires a great deal of computational power and time. Fortunately, there are alternatives. A paper published by Google engineers describes a way to make searching with images easier through algorithms designed to distinguish between different colors and borders.

These algorithms are good at picking out the important colors and shapes in a given picture, allowing the most informative parts of the image to be emphasized. As a result, computational power can be used more efficiently.
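
As a rough illustration of this kind of color-and-border analysis (a simplified sketch, not the actual algorithm from the paper), the Python example below computes a coarse intensity histogram and flags pixels where brightness changes sharply, the raw ingredients of a basic edge detector.

    # A toy version of color-and-edge analysis on a tiny grayscale "image":
    # a 5x5 grid with a bright square on a dark background.
    image = [
        [10,  10,  10,  10, 10],
        [10, 200, 200, 200, 10],
        [10, 200, 200, 200, 10],
        [10, 200, 200, 200, 10],
        [10,  10,  10,  10, 10],
    ]

    def histogram(img, bins=4):
        """Count how many pixels fall into each intensity bin (0-255)."""
        counts = [0] * bins
        for row in img:
            for pixel in row:
                counts[min(pixel * bins // 256, bins - 1)] += 1
        return counts

    def edge_strength(img, y, x):
        """Approximate the brightness gradient at one interior pixel."""
        dx = img[y][x + 1] - img[y][x - 1]
        dy = img[y + 1][x] - img[y - 1][x]
        return abs(dx) + abs(dy)

    def edges(img, threshold=100):
        """Return interior pixels whose gradient exceeds the threshold."""
        h, w = len(img), len(img[0])
        return [(y, x) for y in range(1, h - 1) for x in range(1, w - 1)
                if edge_strength(img, y, x) > threshold]

    print("intensity histogram:", histogram(image))
    print("edge pixels:", edges(image))

Summaries like these are far cheaper to compare than raw pixels, which is why emphasizing only the important colors and borders saves computational power.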

Another technique involves the use of metadata: information, such as the time and location of the shot, that a digital device records when it takes a picture. This metadata can be used to narrow down search results and focus computational power only on relevant candidates.
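
The Python sketch below shows one way such narrowing could work; the photo metadata and the index of places are made-up sample data, and the distance cutoff is an arbitrary assumption. Only candidates near where the photo was taken survive, so any expensive image matching runs on a much smaller set.

    import math

    # EXIF-style metadata a camera might embed in an image file (sample data).
    photo_meta = {"timestamp": "2013-04-09T12:31:00",
                  "lat": 40.4433, "lon": -79.9436}

    # Hypothetical index of places an image search could match against.
    places = [
        {"name": "Joe's Diner",        "lat": 40.4440, "lon": -79.9430},
        {"name": "Golden Gate Bridge", "lat": 37.8199, "lon": -122.4783},
    ]

    def distance_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometers."""
        lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2)
             * math.sin((lon2 - lon1) / 2) ** 2)
        return 6371 * 2 * math.asin(math.sqrt(a))

    # Keep only candidates within 1 km of where the photo was taken.
    nearby = [p for p in places
              if distance_km(photo_meta["lat"], photo_meta["lon"],
                             p["lat"], p["lon"]) < 1.0]
    print([p["name"] for p in nearby])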

As technology and computing power continue to advance, wearable computing devices such as Project Glass may soon become commonplace, and could change how people interact with the world around them.