
The Ethical Dilemmas Of Self-Driving Cars

There have been plenty of articles in the news media about self-driving cars, but the ethical dilemmas they raise haven't been discussed nearly as much. What are those dilemmas? The MIT Technology Review explored the topic:

"Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?”

If one programs self-driving cars to always minimize the loss of life, then in this scenario the owner is sacrificed. Will consumers buy self-driving cars knowing this? Would you?
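To make the idea of a strictly utilitarian program concrete, here is a minimal, hypothetical sketch (the names and data structure are mine, not drawn from any real vehicle software): among the maneuvers available in an emergency, the car picks whichever one has the lowest expected loss of life, even when its own occupants are counted among the casualties.

from dataclasses import dataclass

@dataclass
class Maneuver:
    """One possible emergency action, with rough casualty estimates."""
    name: str
    total_casualties: float      # expected deaths overall, occupants included
    occupant_casualties: float   # expected deaths among the car's occupants

def choose_utilitarian(options: list[Maneuver]) -> Maneuver:
    """Minimize total expected loss of life; occupants get no special weight."""
    return min(options, key=lambda m: m.total_casualties)

# The scenario from the quoted dilemma: continue into the crowd, or swerve into a wall.
options = [
    Maneuver("continue straight into the crowd", total_casualties=10, occupant_casualties=0),
    Maneuver("swerve into the wall", total_casualties=1, occupant_casualties=1),
]

print(choose_utilitarian(options).name)  # "swerve into the wall" -- the owner is sacrificed

A self-protective rule would simply weight occupant_casualties first, and that difference is exactly the tension the research described below found in people's answers.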

Researchers posed this and similar ethical dilemmas to workers on Amazon Mechanical Turk, a crowd-sourcing marketplace where people complete small tasks that require human intelligence. The researchers found that while people wanted self-driving cars programmed to minimize the loss of life:

"This utilitarian approach is certainly laudable but the participants were willing to go only so far. [Participants] were not as confident that autonomous vehicles would be programmed that way in reality – and for good reason. They actually wished others to cruise in utilitarian autonomous vehicle more than they wanted to buy a utilitarian autonomous vehicle themselves”

So, few people want to sacrifice themselves; they would rather others make that sacrifice.

There are plenty of ethical dilemmas with self-driving cars:

"Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the card than for the rider of the motorcycle? Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults, and had less agency in being in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chooses one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?”

You can probably think of more dilemmas. I know I can. Should self-driving car manufacturers offer different algorithms so each driver can use the one they prefer, or should all cars use the same algorithm? If algorithms differ, how will that affect insurance rates? If a driver crosses from one country to another, must the car's algorithm be adjusted for each country?
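If manufacturers really did offer different moral algorithms, the selection might amount to something like this hypothetical sketch (the names and structure are purely illustrative): a small registry of interchangeable policies, with the buyer's choice overridden by whatever the local jurisdiction mandates.

from typing import Callable, Optional

# Each maneuver reduced to the two estimates a moral policy might weigh,
# e.g. {"name": "...", "total_casualties": 10, "occupant_casualties": 0}.
Maneuver = dict
Policy = Callable[[list], Maneuver]

def utilitarian(options: list) -> Maneuver:
    """Minimize total expected loss of life, occupants included."""
    return min(options, key=lambda m: m["total_casualties"])

def self_protective(options: list) -> Maneuver:
    """Protect the occupants first; break ties by total expected casualties."""
    return min(options, key=lambda m: (m["occupant_casualties"], m["total_casualties"]))

POLICIES: dict = {
    "utilitarian": utilitarian,
    "self_protective": self_protective,
}

def active_policy(buyer_choice: str, country_mandate: Optional[str] = None) -> Policy:
    # A country-level mandate, if any, overrides the buyer's selection --
    # one possible answer to the cross-border question above.
    return POLICIES[country_mandate or buyer_choice]

Whether regulators, manufacturers, or buyers should get to pick from such a registry is, of course, the real question.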

Last, I prefer the term "self-driving" to describe the new technology. While some technology sites and news organizations use the term "driverless," "self-driving" is a more accurate description, and it places the responsibility where it belongs: something is driving the car, just not a person.

And there may be hybrid applications in the future, where a driver operates the vehicle remotely, as drone operators do today. So there will always be a driver: somebody or something.

Read the MIT Technology Review article titled "Why Self-Driving Cars Must Be Programmed To Kill," and share your opinions below about how self-driving cars should be programmed.
