Self-driving vehicles raise multiple ethical quandaries

Credit: India Price/Online Editor

As self-driving cars move rapidly from fantasy into reality, they raise ethical questions that we have not had to consider until now.

Machines that make decisions have been around for quite a while, but civilians have not had access to autonomous machines with the destructive capacity of automobiles. When cars collide with stationary objects or with other cars, severe damage is usually done to everything involved in the crash. Since these situations arise for unpredictable reasons, there must be an ethical framework both for how the cars make decisions and for determining who is liable when the cars do damage.

The first question that has to be answered is whom the car should prioritize when it has to decide who lives and who dies.

Ethically, this is a complex question.

A purely consequentialist car would choose the course of action most likely to save the most people, but that calculation is affected by many factors.

It may never be possible for software to determine, within the fraction of a second the car has to respond, exactly who is inside or outside the car, how likely each person would be to survive a collision, and the many other pieces of information such a calculation requires.
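As a rough illustration of what such a calculation demands, consider the following minimal Python sketch of a purely consequentialist rule: among the maneuvers available in the instant before impact, choose the one with the highest expected number of survivors. Every maneuver, person, and survival probability here is invented for the example; no manufacturer's actual logic is being described, and the point is how much information the rule requires.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    # Estimated survival probability for each person this maneuver affects,
    # occupants and bystanders alike. Gathering these numbers in the
    # fraction of a second before a crash is precisely the hard part.
    survival_probabilities: list[float]

def expected_survivors(m: Maneuver) -> float:
    # Expected number of people who live if this maneuver is taken.
    return sum(m.survival_probabilities)

def choose_consequentialist(options: list[Maneuver]) -> Maneuver:
    # The consequentialist rule: maximize expected survivors,
    # with no regard for who the people are or where they stand.
    return max(options, key=expected_survivors)

options = [
    Maneuver("brake straight", [0.4, 0.9, 0.9]),  # pedestrian, two occupants
    Maneuver("swerve left", [0.95, 0.5, 0.5]),    # spares pedestrian, risks occupants
]
print(choose_consequentialist(options).name)  # prints "brake straight"

Everything the sketch takes for granted, from the list of affected people to their survival odds, is exactly what a real car cannot know in time.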

A car whose ethics focus on the decision rather than the result would need a consistent process for determining its reaction to each potential crash. Any such process will be subject not only to ethical skepticism but also to the individual circumstances of each person and each collision.

A person’s biography and relationships to the people around them might also become relevant in ways that are hard or impossible for a car to respond to. If parents dropping their children off at school saw those children run into the street in front of the car, the parents would want the car to avoid their own child; a human driver might manage that reaction, but a car likely cannot.

Most manufacturers of self-driving cars have decided to prioritize the driver’s safety in every situation. The most obvious justification is that if a consumer has to choose between two cars, one of which is more likely to kill them, they will almost certainly choose the one they are less likely to die in.

This is a market force rather than an ethical decision, but it has some merit. If everyone on the road were in a car that prioritized its own driver’s safety, cars would be very predictable in their responses to unexpected obstacles, and everyone on the road would be better protected in accidents because each car would be preserving its own occupants’ safety. In collisions with pedestrians, the pedestrian is in danger, but pedestrians move very slowly compared to cars and are easier to avoid entirely.
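One way to read that occupant-first policy is as a two-step rule: keep only the maneuvers that are safest for the car's own occupants, then pick whichever of those is least dangerous to everyone else. The hypothetical Python sketch below illustrates that reading; the maneuvers, risk estimates, and tolerance value are all invented for the example, not any manufacturer's actual logic.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_risk: float  # estimated chance of serious harm to the car's occupants
    external_risk: float  # estimated chance of serious harm to people outside

def choose_occupant_first(options: list[Maneuver], tolerance: float = 0.05) -> Maneuver:
    # Step 1: protect the occupants. Keep only maneuvers whose occupant
    # risk is within a small tolerance of the safest option available.
    safest = min(m.occupant_risk for m in options)
    candidates = [m for m in options if m.occupant_risk <= safest + tolerance]
    # Step 2: among those, minimize harm to people outside the car.
    return min(candidates, key=lambda m: m.external_risk)

options = [
    Maneuver("brake straight", occupant_risk=0.10, external_risk=0.60),
    Maneuver("swerve right", occupant_risk=0.12, external_risk=0.20),
    Maneuver("swerve left", occupant_risk=0.40, external_risk=0.05),
]
print(choose_occupant_first(options).name)  # prints "swerve right"

Notice that this rule needs far less information than the consequentialist one, and a fleet of cars all running it would behave in the same predictable way, which is the predictability argument made above.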

Either way, the process has to be universal: if cars had individual decision-making processes, they would be unpredictable, leading to more collisions.

Further, corporations have a profit incentive rather than a public-benefit incentive, so the decision-making process is in better hands if it is universal and set outside the corporate environment.

Another question that surrounds self-driving cars is liability: if a self-driving car gets into an accident, who is responsible?

Pugwash was unanimous that collisions between fully autonomous vehicles should be the responsibility of the companies that designed the driving algorithms for the cars involved.

First, it is those companies’ driving processes that failed and caused the collision.

Second, liability gives those companies easy access to crash data they can use to improve their algorithms, and legal pressure and investigations could drive further increases in the safety of self-driving cars.

Semi-autonomous vehicles were more of a gray area. Their instruction manuals often tell the driver to stay alert and take control of the car if necessary. This could lead to more user error, with people crashing a car that would otherwise have gotten itself out of the situation, but it also means people are responsible for serving as the failsafe for their own vehicles.

Pedestrians can also cause collisions by crossing the street negligently. The current process of determining fault is probably the best way to handle these cases.

Changes in technology often introduce a new context for ethical questions, and automating something with as much potential for damage as driving makes those questions especially pressing.

Making sure these questions get answered, or at least acknowledged, is key to ensuring that the future of self-driving cars is pursued responsibly by both car companies and car owners.

Student Pugwash is a non-advocacy, educational organization that discusses the implications of science. This article is a summary of last week’s discussion on self-driving cars.