As soon as a new technology comes along, the public dials up its natural fear of all things technological and starts imagining everything that could go wrong. This technophobic trend has naturally been extended to the self-driving car. As soon as the possibility of self-driving cars entered the public consciousness, people started assuming that those evil robot cars would make buggy, counterintuitive decisions at every turn, causing the deaths of hordes of innocent citizens on a daily basis. (Totally unlike, say, the hordes of innocent citizens killed by human drivers on a daily basis.)
The current trend along these lines is to place self-driving cars in “ethical” dilemmas, to illustrate their inherent danger to their human occupants and to anyone in their path… this Phys.org article is a typical example of the genre.
But just about every scenario imagined has been based on a false dilemma, a blatant exaggeration of possibilities that even a sixth grader should be able to see past.
The illustration above shows a typical scenario posed to a self-driving car: upon the sudden appearance of a pedestrian or crowd in the street, does the self-driving car:
- swerve to avoid the crowd, even if that means intentionally killing a pedestrian who is not in the road (and presumably not in immediate danger);
- swerve to avoid a single pedestrian in the road, even if that means having an accident that injures or kills the car’s passengers; or
- swerve to avoid a crowd in the road, even if that means having an accident that injures or kills the car’s passengers?
The question basically assumes there are no better solutions, and the car will be “forced to make a moral choice” to pick who it must injure or kill.
And that assumption is flat-out wrong.
To begin with, there’s no morality or ethics involved in a self-driving car’s actions to avoid collisions; the car is designed to stop as quickly as possible—not to swerve like a maniac driver in a buddy-cop movie. So it’s not going to be deliberating on the comparable value of individual lives and “choosing” directions in which to swerve.
It must also be noted that in tests of self-driving cars so far—and this has included numerous tests of cars driving on public roads and executing cross-country trips—self-driving cars have already demonstrated exemplary driving prowess, and have covered thousands of miles without causing a single accident. (So far, every documented accident that has involved a self-driving car has been determined to be the fault of another human driver.)
There are two primary reasons for this. First, self-driving cars have comprehensive packages of sensors that give them a highly detailed knowledge of their surroundings, to a degree well beyond that of the average human driver. Thanks to modern sensors and computers, self-driving cars see more, see it better, see it sooner, and are more completely aware of their surroundings than any human driver is capable of being. They are also incapable of being distracted or losing focus, as human drivers regularly do. If there is a danger, the car detects it well before a human driver would normally see it coming.
And yes, there have been incidents of self-driving cars not recognizing a potential obstacle or obstruction in the road. For the record, there are many more such incidents with human drivers, often through distraction (cellphones) or visual issues (like sunlight in their eyes). And self-driving sensors are being improved all the time, partly thanks to these unexpected issues, so they won’t make the same mistake twice.
Second, the power of the computer means that self-driving cars can react to a danger in milliseconds… a few thousand times faster than it normally takes a human driver to see a scenario, register it as a threat, and react physically. And because a human’s quickest reaction is often a panicked one, prone to bad impulse decisions in a crisis, the self-driving car is much more likely to make a good decision than a human driver.
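To put rough numbers on that gap, here’s a back-of-the-envelope stopping-distance sketch. All the figures (speed, reaction times, braking deceleration) are illustrative assumptions for the sake of the arithmetic, not measurements from any real vehicle or study:

```python
# Back-of-the-envelope stopping-distance comparison at city speed.
# Total stopping distance = distance covered during the reaction delay
# plus the braking distance (v^2 / 2a). All constants are assumed values.

SPEED_MPH = 30
SPEED_FPS = SPEED_MPH * 5280 / 3600   # 30 mph = 44 ft/s
HUMAN_REACTION_S = 1.5                # assumed human perception + reaction time
COMPUTER_REACTION_S = 0.05            # assumed computer reaction time (tens of ms)
DECEL_FPS2 = 20.0                     # assumed braking deceleration (~0.62 g)

def stopping_distance(speed_fps, reaction_s, decel_fps2):
    """Distance traveled during the reaction delay, plus braking distance."""
    reaction_dist = speed_fps * reaction_s
    braking_dist = speed_fps ** 2 / (2 * decel_fps2)
    return reaction_dist + braking_dist

human = stopping_distance(SPEED_FPS, HUMAN_REACTION_S, DECEL_FPS2)
computer = stopping_distance(SPEED_FPS, COMPUTER_REACTION_S, DECEL_FPS2)
print(f"Human driver:  {human:.0f} ft to stop")     # ~114 ft
print(f"Computer:      {computer:.0f} ft to stop")  # ~51 ft
```

Under these assumptions, the human’s reaction delay alone adds more than 60 feet of travel before braking even begins; the computer’s head start, not better brakes, is what shrinks the stopping distance by more than half.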
In short: Self-driving cars will be better than you. Which leads to the true, realistic solution of the self-driving car in the scenario posed above:
The self-driving car, having identified the danger far sooner than any human driver, has more than enough time to formulate a safe reaction without panicking, executes that reaction perfectly and stops, without causing an accident or hitting anyone. Duh.
And sure, there will be the incredibly rare situation when a car will not be able to react and avoid killing someone. But even without bringing morality into the situation, the car will be better able to react and cause a minimal loss of life than any human driver could do in the same situation. This is the reality of the self-driving car and its capabilities: Even in extreme scenarios, the self-driving car will be far safer than a car in the hands of the vast majority of flawed, distracted, slow, impaired and panicky human drivers. Period.
And given this undeniable reality, it continues to amaze me when I hear people insisting that they can drive better than self-driving cars; people actually believe they can see sooner, react faster, and respond more effectively than computers, and that our roads would be safer without computer-controlled cars. Without mincing words, it’s just plain delusional. We will all be safer when computers do our driving for us; and the sooner we accept that, and do everything we can to accelerate the process of putting us all in self-driving cars, the better off we’ll all be.
Seriously? I can’t believe that there is such a fear out there. It’s always that nagging bunch that never wants humanity to progress. Ugh! Don’t worry, folks, self-driving cars can probably text and drive without killing anyone. YOU can’t.
That’s it exactly: People actively seeking any reason to distrust or condemn a new technology, even if it means inventing outlandish scenarios to try to create “aha!” moments. “Yes, in this one-in-ten-million example, a human driver would be better. Harrumph!”
It really is a sign of technophobia, and as I’ve seen in countless discussions (including some responses to similar posts on Facebook), the reasoning behind it is not fully based on rationality. But that’s also what makes it impossible to refute: Logic is pointless in what is essentially an emotional repulsion to a new idea.
I agree with the overall point of autonomous vehicles being better drivers than people. What I don’t agree with is the author’s assertion that there is never a situation where the car will need to decide to crash, causing injury to the occupants, pedestrians, or other motorists. Just because a computer can react far more quickly than we can doesn’t make a car’s brakes more effective. If a pedestrian walks out from behind a parked van just 10 feet in front of the computer-controlled car, the car will have to swerve to avoid a collision with the pedestrian… unless it’s equipped with infrared sensors, in which case maybe it would detect the pedestrian before it can make direct visual contact. Even if that’s the case, my point still stands: there will be times when a collision will occur and a decision will need to be made.
“Decide to crash”?
This is one of the biggest problems here: Discussions about self-driving cars are full of unnecessarily loaded statements like the one above. Instead of “decides to crash,” try this one: “Recognizes that a collision is inevitable.”
Now: If a self-driving car is in a position where it recognizes that a collision is inevitable (because, yeah, it’ll happen), the most sensible thing to do is to minimize the damage caused by said collision. And in general, that is accomplished by straight braking… not swerving.
Why? Because swerving is almost always a panicked response, and usually a move that only works out through blind, stupid luck. A driver swerves to avoid a pedestrian in the road… only to point his car at a sidewalk full of onlookers. Or a car coming in the opposite direction on the road. Or a series of parked cars, between which are people waiting to cross the road. Swerving cars are rarely under the driver’s full control, making them even more dangerous than cars traveling in predictable directions (like straight ahead).
Are these random, one-in-a-million possibilities? Actually, they happen almost daily in the US. We’re almost always better off just tromping on our (hopefully anti-lock) brakes and hoping the moron who walked out in front of us has the Darwinian sense to jump back out of the road.
Let’s face it: If you’re really that concerned about inevitable collisions caused by people who have either the incredibly bad luck or lack of common sense that puts them in harm’s way of whatever is out there… then the problem isn’t the self-driving cars. The problem is LIFE. And LIFE (to put it politely) HAPPENS. You’d better go do something about that first, or it won’t make any difference who’s driving what.