How Self-Driving Ubers Navigate

Credit: Paola Mathus

Since allowing customers to hail self-driving cars in mid-September, Uber’s futuristic vehicles have become a common sight around Pittsburgh. With their roofs chock-full of cameras and spinning sensors, they’re pretty easy to spot. But what are all those gadgets actually doing?

The most distinctive part of Uber’s self-driving technology, and the cornerstone of its navigation, is the Light Detection and Ranging (LiDAR) unit, or what most laymen would refer to as the spinny part on top. The unit maps the car’s surroundings in 3D in minute detail so the car is aware of any surrounding objects, like parked cars, buildings, pedestrians, or debris in the road. LiDAR works by firing a laser at an object and measuring the time it takes for the light to bounce back to a sensor on the LiDAR unit, thus mapping the object’s location. Because light travels at a constant, known speed, it’s easy to calculate the distance to the object once the light’s travel time is known.
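The time-of-flight calculation described above is simple enough to sketch in a few lines of Python. This is a hypothetical illustration of the principle, not Uber's actual code:

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_distance(round_trip_seconds):
    """Distance to an object from a laser pulse's round-trip time.

    The pulse travels to the object and back, so the one-way
    distance is half the total path the light covered.
    """
    return C * round_trip_seconds / 2.0

# A pulse that returns after 100 nanoseconds hit an object
# roughly 15 meters away.
print(round(lidar_distance(100e-9), 2))  # → 14.99
```

Because the timing is the only measurement involved, the accuracy of each point comes down to how precisely the sensor can clock the returning pulse.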

The LiDAR unit on top of Uber’s self-driving cars fires 1.4 million laser points per second to build a picture of the car’s environment. It is placed on top of the car so the lasers have as long a reach as possible, unobstructed by pedestrians or other cars. The car also has four less powerful LiDAR units placed on the front, rear, and sides of the car to cover blind spots.

LiDAR’s main benefit is its accuracy. Before the advent of LiDAR technology, radar, which uses radio waves instead of light waves, was most commonly used for detecting and mapping objects. At 500 feet from its source, a radar beam disperses to a width of 150 feet, while a laser beam is only 18 inches wide at the same distance. This makes a huge difference for self-driving cars, where being a few feet off on an object’s location can lead to an accident.
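The widths quoted above imply a divergence angle for each beam, and for small angles the spread grows roughly linearly with distance. A quick sketch, using divergence angles back-solved from the article's own figures (an assumption, not measured hardware specs):

```python
def beam_width_ft(divergence_rad, distance_ft):
    # Small-angle approximation: width grows linearly with distance.
    return divergence_rad * distance_ft

# Divergence inferred from the article: 150 ft wide at 500 ft for radar,
# 18 in (1.5 ft) wide at 500 ft for the laser.
radar_div = 150 / 500   # ~0.3 rad
laser_div = 1.5 / 500   # ~0.003 rad

print(beam_width_ft(radar_div, 1000))  # ≈ 300 ft at 1,000 ft
print(beam_width_ft(laser_div, 1000))  # ≈ 3 ft at 1,000 ft
```

Doubling the distance doubles the spread for both, but the laser's hundred-fold tighter beam is what keeps its map sharp at street-scale ranges.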

Radar also finds its place in self-driving cars, however. While LiDAR is accurate in detecting the shape and details of objects, its Achilles heel is accurately monitoring the speed of surrounding vehicles. Uber’s cars are outfitted with two radar units on their bumpers that provide 360° coverage and detect the speed of other vehicles on the road in real time. The data from these radar units also serves as a cross-check for the LiDAR data. Automobile technology relies on redundancies and cross-checks for safety, since a malfunction can have tragic results.
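Radar reads speed directly from the Doppler shift of the reflected wave: a target moving toward the sensor compresses the echo's frequency by an amount proportional to its speed. A minimal sketch of the standard formula, assuming a 77 GHz carrier typical of automotive radar (the article doesn't specify Uber's hardware):

```python
C = 299_792_458.0  # speed of light in m/s

def doppler_speed(freq_shift_hz, carrier_hz):
    """Relative speed of a target from the Doppler shift of its echo.

    The factor of 2 accounts for the shift happening twice: once on
    the outbound wave and once on the reflection.
    """
    return freq_shift_hz * C / (2.0 * carrier_hz)

# A shift of about 5.1 kHz on a 77 GHz carrier corresponds to a
# target closing at roughly 10 m/s (~22 mph).
print(doppler_speed(5134, 77e9))
```

This is why radar gets speed "for free" in a single measurement, while LiDAR would have to compare object positions across successive scans to estimate it.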

The wave-based object detection technology is supplemented with 20 cameras placed around the car. The color camera placed just below the largest LiDAR unit on top of the car plays the vital role of putting the LiDAR map into color so the car can see traffic lights. The cameras around the car pick up far more data than the other sensors, including color and texture. The problem comes with interpreting the overwhelming amount of data the cameras pick up.

With all of these sensors telling the car exactly where it is, it still needs to figure out where it’s going. That is where the rest of the technology cluttering the top of the car comes in. The back of the car’s hood is covered with a collection of antennas that allow it to position itself via the Global Positioning System (GPS). The car receives precise information on its exact position and the current time from at least three of the 30 satellites orbiting the Earth as part of the GPS network, and uses the overlap of the three satellites’ ranges to pinpoint its location. This method, called trilateration, is accurate within several meters and is effective in most scenarios.
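Trilateration is easiest to see in two dimensions: each known range defines a circle around its transmitter, and subtracting the circle equations pairwise cancels the squared unknowns, leaving a small linear system for the position. A hypothetical sketch, with fixed anchor points standing in for satellites:

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Find (x, y) given three anchor points and the distances to each.

    Subtracting the circle equations pairwise eliminates x^2 and y^2,
    leaving a 2x2 linear system solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Three anchors, with ranges measured to a point actually at (3, 4).
anchors = [(0, 0), (10, 0), (0, 10)]
ranges = [math.dist(a, (3, 4)) for a in anchors]
x, y = trilaterate(anchors[0], ranges[0], anchors[1], ranges[1],
                   anchors[2], ranges[2])
print(x, y)  # → 3.0 4.0
```

Real GPS works in three dimensions and must also solve for the receiver's clock error, but the geometric idea, intersecting ranges from known positions, is the same.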

However, several meters can make a world of difference for an automated car driving without human supervision. Uber has combated this problem by restricting its self-driving Ubers to certain routes and having them supervised by a vehicle manager behind the wheel at all times.

Google, which is also developing a self-driving car, has taken a different approach. Google has pre-mapped all 700,000 miles of road it allows its cars to drive on in exquisite detail, including everything from potholes to curb heights. When a self-driving car is actually on the road, its GPS position is matched against sensor map data previously collected in the same spot and adjusted accordingly.

Both Google’s and Uber’s strategies for dealing with the positioning problem have their flaws. Uber’s strategy limits the car’s flexibility and keeps it from being fully independent of a human driver, while Google’s strategy restricts cars to mapped roadways. And while 700,000 miles may sound like a lot, there are over 4 million miles of public roads in the U.S.

Uber’s self-driving cars combine an amazing array of technology to navigate their surroundings, and even more technology to make decisions based on the information they detect. While fully autonomous cars may still be far off, the technology being used today drives home the fact that we’re living the future.