Last month, the National Highway Traffic Safety Administration (NHTSA) released proposed policy guidelines for the development and regulation of self-driving cars. Although the guidelines were the product of consultation with industry and state regulators, they are deliberately styled as preliminary, with input sought from the public about how to improve them. Given their preliminary status and the vagueness of many of the guidelines, it is probably more accurate to refer to the NHTSA document as a proposal for a policy rather than a policy itself.
Nevertheless, observers rightly treated the announcement of the NHTSA guidelines as an important indication that widespread use of self-driving cars on public roads will soon be a reality. Government agencies do not produce hundred-plus-page reports in response to science fiction. By comparison, the Obama administration explained why it rejected a petition to build a Star Wars-style death star in four sentences.
We can anticipate three phases of government policy regarding self-driving cars. In the first and current phase, government regulators working with industry leaders anticipate and look to head off problems, so that self-driving cars can be introduced and sold to the public. The second phase will be one of transition, as new driverless or semi-driverless cars share the road with the legacy fleet of manually operated cars.
This column addresses an issue that will arise in the third and final phase, when driverless cars dominate the roads: Should people who prefer to drive for themselves be permitted to do so? The NHTSA guidelines do not address this question, but in twenty years or so, state and federal regulators will need to. With progress on self-driving cars coming much faster than many experts anticipated, it is not too early to begin thinking about whether we should work towards banning human driving on public roads.
Why Ban Manually Operated Cars?
People fear new technology. Thus, although 2015 saw almost 40,000 Americans killed and over four million seriously injured on the roads, it was bigger news this past summer when the driver of a Tesla being operated in autopilot mode died.
To be sure, that is not a fair numerical comparison, because there are so many more cars on the road being operated by human drivers than by computer. Still, Tesla, Google, Uber, and other companies developing and testing driverless cars understand that to earn the trust of the public they must market technology that is not merely as safe as the cars currently being driven by humans but at least an order of magnitude safer.
What should happen when self-driving cars achieve overwhelming superiority in safety relative to human-driven cars? If a self-driving car is ten times less likely to get into a serious or fatal accident than a human-driven car, should government nonetheless permit humans to drive their own cars? If so, is there any safety ratio that would justify a ban? What if self-driving cars are a hundred times safer? A thousand?
Beyond the safety improvements we can expect from self-driving cars relative to human-driven cars, there are network effects to consider. Two self-driving cars at risk of colliding can communicate with one another within milliseconds in order to coordinate accident-avoidance maneuvers. A self-driving car nearing a collision with a human-driven car can only move itself; the slower reaction time and uncoordinated decisions of the human driving the other car create a further safety disadvantage.
Thus, the argument for banning human-driven cars is really quite simple: It would save many lives and avert many more serious injuries.
Reasons to Drive Yourself
What sorts of reasons might nonetheless be invoked against banning human-driven cars? Consider a few.
Some people might want to drive themselves simply for entertainment. Go-kart racing and computer simulated driving games are popular because driving can be fun.
Other people might want to drive their own cars less as a form of thrill seeking than as a kind of retro chic. Just as today’s hipsters prefer typewriters to laptops, vinyl records to Spotify, and penny-farthings to lightweight bicycles with multiple gears, the hipsters of 2040 might derive ironic satisfaction from piloting their 2012 Hyundai Sonatas on roads mostly filled with self-driving cars. Even today, owners of antique cars occasionally take their Model A Fords out for a spin.
Still others might want to drive themselves in order to feel safer, even if doing so does not actually make them safer. A tiny number of exceptionally good drivers might even be right about their ability to drive more safely than the robocars—although as technology improves, their numbers would dwindle to zero.
Yet even if people’s feelings about safety do not correspond with actual safety, they still have an interest in driving themselves. Many people are afraid of flying, even though for any substantial distance it is a considerably safer mode of travel than driving. We nonetheless allow people to travel by car or bus because we recognize that involuntarily subjecting them to the anxiety that accompanies flying would do real harm. People with similar anxieties about self-driving cars would suffer like harm from a ban on human driving.
Weighing the Competing Concerns
How should regulators trade off the additional safety that a human-driving ban would achieve against the real and perceived benefits that some people derive from driving their own cars? If we apply simple cost-benefit analysis, it would seem that the benefits of banning human-driven cars from the roads outweigh the costs. Tens of thousands of annual saved lives and millions of averted serious injuries pretty clearly outweigh not just the forgone satisfaction of future retro hipsters but even the more substantial harm of anxiety for the robophobes.
But is simple cost-benefit analysis the right tool? Our law has a libertarian streak. It generally permits people to engage in risky activities for entertainment and other purposes, even when policy experts think that the risks are not worth taking. Quiet reading and chess are safer than skydiving and mountain climbing, but we don’t ban the latter activities on the ground that people can entertain themselves in safer ways.
Indeed, we don’t even require that everyone drive a Volvo. If you want to save money or trade safety for performance, you can. In general, the law requires that cars and other products be “safe enough,” not that they be as safe as the safest car (or other product) on the market.
Are today’s human-driven cars “safe enough”? They are for us, but that does not mean that they should be regarded as sufficiently safe in a couple of decades, when self-driving cars have changed the baseline. Safety equipment improves over time and when it does, prudent regulators raise the standards. Prior to the 1940s, few cars on the road had turn signal lights, but no one today would argue that it should therefore be permissible to sell or drive a car without them—even though hand signals can be used as a substitute.
The case against regulation is strongest when people seek the freedom to take risks for themselves but not for others. Even then, however, paternalism sometimes wins out over libertarianism. Moreover, many ostensibly self-regarding acts have social consequences. The adult who dies because he exercised the freedom to ride a motorcycle without a helmet (as permitted in some states) might orphan his children at great psychic cost to them and financial cost to the state.
Seen in this light, the case for eventually banning non-self-driving cars is especially strong. There is at least a comprehensible libertarian argument that adults should be able to decide for themselves whether to wear helmets or buckle their seatbelts, because these safety devices protect the users themselves. By contrast, many of the expected safety gains of self-driving cars will accrue to people riding in other vehicles.
Some day in the not-too-distant future, the claim that people should be permitted to drive their own cars will sound as implausible as the claim that people should be permitted to drive drunk.
Some of us will NEVER want a self-driving car. How about solutions for some other far more mundane tasks…like folding laundry and emptying the dishwasher?
I completely disagree with self-driving cars. Perhaps in the liberal, citified universe of the left, you can set up a specified area for them. Oh wait, they already have mass transit. I personally do not want computer-driven vehicles on “my roads.” I work in IT; computers have glitches. Glitches cause failures, and in this case glitches kill people. Computers can be hacked, infected with viruses, etc., even when they are separate from other networks. Computers should never be given full control over people. This article indicates a certain percentage of drivers are better, and you have no proof that percentage will go down. Plus, I would argue that percentage is definitely higher than what a report or study done in defense of a project would admit. In addition, there are many intangibles that a program can never take into account. For example, if it came down to it, I would purposely take myself out to save a child or young person.
Thank goodness driving your own car isn’t even remotely explicitly mentioned in the Constitution, or the chance of rational policy development would be exactly 0%.
That something isn’t mentioned in the Constitution has nothing to do with whether the government has any authority to ban it. The only reason the Bill of Rights was added at all was that the anti-Federalists would not support ratification without one. Otherwise, as the authors of the Constitution saw it, a Bill of Rights was not only unnecessary but even dangerous: because it says the government cannot do things it was never given the power to do in the first place, it could be construed to mean that the government does have some authority to restrict such things, or things not explicitly mentioned in said bill of rights (which is what you are doing).
Do we really want self-driving cars?
Will self-driving cars mean the end of highway fatalities?
In the same way, one might ask:
“Will the complete prohibition and confiscation of privately owned firearms mean the end of homicides by shooting?”
And further:
“Will the act of mandatory surveillance of every minute in our life mean the end of domestic violence, rape, public violence, homicides and theft?”
If technology so permits, it is always possible to mandate ever more prohibitions, restrictions and limitations on practically everything that might conceivably pose a risk to humans – and then the question of whether the act will mean the end of phenomena which we fear or dislike will increasingly be a moot one.
The point is that if the aim is “zero tolerance” towards traffic accidents and casualties, homicides, or fatalities from children falling out of trees at the kindergarten, people will have to realize that those who give up more and more of their choices and possibilities in order to gain (an illusory) safety will in the end lose both their freedoms and the safety. If self-driving cars are to function optimally, the roads and traffic as a whole must be structured around a complete prevention of people driving their own vehicles – including four-wheeled cars and motorcycles.
Thus, the discussion around self-driving cars is not about ending highway fatalities. It is about society’s capability of tolerating the presence of risks in everyday life. It is sometimes stated that self-driving cars are coming slowly but surely. That is wrong. The driverless cars, and the accompanying inevitable universal prohibition of manually driven cars in such a scenario, do not “come.” Rather, they are brought in by humans who are willing to sacrifice their ability to drive a vehicle and their freedom to use a driver-controlled vehicle anytime, anywhere, with their own senses and motor abilities.
Yes, by all means manual driving with our own acquired abilities entails risks.
To let children climb in trees entails risks.
To permit people to own firearms entails risks.
I live in Norway. In 1970, 560 people lost their lives on Norwegian roads. In 2012, with vastly better roads but 2.8 million cars and 5.2 million inhabitants, a mere 154 people died in traffic accidents. That number is far lower than the annual number of people who fall prey to influenza in Norway. To put that in proper perspective, it is estimated that during the first decade of the 2000s between 7,000 and 8,000 people died from tobacco use. Around 12,000 people in Norway die from cancer each year. Suicide claims 900 lives. Each year, there are approximately 42,000 deaths in Norway among 5,000,000 people. Of these, fewer than 200 die in traffic accidents. In 2015, only 125 were killed in a population of more than 5,200,000. That is hardly an enormous number. In the US, there were more than 322 million people in 2016. Naturally the number of traffic accidents will reflect the larger population, but the car is not a terrible killing machine even if it presents risks to us.
In the US, 2,626,418 people died in 2014 from all causes. Of these, 32,675 died in traffic accidents. Can that be tolerated? To put the numbers in perspective, let it be mentioned that in 2014 all the automobiles in the US together travelled 3,026 billion miles. That works out to roughly 1.08 traffic fatalities per 100 million vehicle miles travelled. Frankly, it bodes no good for our society if these numbers lead society to outlaw manual driving. Do we really want to give up our freedom, bit by bit, for a little more “safety”?
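A quick back-of-the-envelope check of that rate, using only the figures quoted above (32,675 traffic deaths and 3,026 billion vehicle miles travelled in 2014):

\[
\frac{32{,}675\ \text{fatalities}}{3{,}026 \times 10^{9}\ \text{miles}} \times 10^{8}\ \text{miles} \approx 1.08\ \text{fatalities per 100 million vehicle miles}
\]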
The force behind some people’s fascination with self-driving cars is precisely the same tendency as we see when children are denied their right to climb in trees because they might fall down and become injured, and when some people would like universal electronic surveillance by the state in order to put an end to crime.
Needless to say, the main driver behind the attempt to force self-driving cars on us is market forces. A very large number of technology companies, research institutes and car manufacturers all see considerable opportunities for profit from self-driving cars and technology – at the expense of our freedom to control our own vehicles. No matter what the situation is, a self-driving car will not let you judge the situation and make individual choices. This greatly reduces the usefulness of the individual vehicle. Moreover, in a self-driving scenario people will lose their ability to drive manually – and thus it will then rightly be said that people cannot be permitted to drive their own vehicles.
The main argument used to convince the public that we need to give up manual driving is “safety.” That is what we need to withstand and counteract. We need to say openly that we understand the risks, and that the proponents of self-driving cars have no right to lay claim to moral superiority – taking away our freedom is not more ethically defensible than our choice to accept risks and the responsibilities that invariably go with freedom.
Every time, it is possible to defend a new restriction with the treacherous allure of putting an end to an unpleasant phenomenon or a danger – or some real or imaginary expense for that matter. Given the technological possibility, there is no limit to how far one can go in restricting what people are allowed to do.
Alas, maximum safety means minimum choice, minimum freedom. In other words the freedom, the choices and ultimately many of the things that enrich life and give us capabilities in our everyday life will be removed – but the complete safety will forever elude us in a living world. That is the way it is; like it or not. Thus it is our challenge as individuals and on the societal level to accept that life entails and will always entail risks to us.
To conclude, the question is not whether one can end fatalities on the highway or for that matter anywhere else. The really important question is how far we are willing to go, and how many things we are willing to deprive ourselves of, in order to try to eliminate what we fear or dislike – or what is politically correct to remove at any given time.
– Perhaps, just perhaps, it would be wiser to accept life’s risks.
Per Inge Oestmoen, Norway
There are several very large (nontechnical) impediments to self driving cars. The largest is public and private infrastructure. Self driving cars need well maintained roads with clearly painted lines to operate (or perhaps in the future buried signal wires in each lane). Will there be a political will to invest the large amount of money necessary for these public infrastructure upgrades? Private infrastructure barriers will be the cost of the self driving autos and the almost certain intensive maintenance (much like aviation) necessary to keep them safe. Should autos be restricted to upper income groups? How will the economically marginal people get to work? Will public transportation be expanded to cover everyone’s needs? Who will pay for this?
The driving public demands all weather, 24 hour autos. Self driving cars will need that capability which may be beyond any feasible technical solution.
I also agree with MomShoots’ post below. Commercial aviation has invested huge sums in automation, yet glitches still occur and accidents are avoided (mostly) by the presence of human pilots, who spend many hours in training and who can correct the issue at hand.
It seems to me that a ban isn’t required. The history of safety equipment on cars has been that while new safety measures (seat belts, air bags, ABS, etc) are required for new vehicles, they aren’t required for old vehicles. While there are still cars on the road without seat belts or crumple zones, they are few and far between.
I can see an incremental approach, where technologies like collision avoidance and emergency braking move from luxury options (as they are now), to standard equipment, to required equipment, as new technology comes along. Eventually, full self-driving will become standard or required, and only the aging fleet will be manual drive.
It may be that the network benefits of self-driving cars will outweigh the benefits of driving an older car rather quickly, and only hobbyists will drive manual cars (much like only hobbyists drive classic cars and Model T’s today). But I think an outright ban won’t be needed.
The idea of a self-driving car with no option for a human being to override the controls is ludicrous. What happens if you’re in a riot situation like the 1992 Rodney King riots? Is the car going to just dutifully stop for the rioters blocking your way when you may need to gun the engine? What about the driver whom motorcyclists tried to box in, and who ran one of them over to escape? What if you are sitting at a red light and two men with guns begin approaching your car? Do you want the car to just sit there until the light turns green, or to be able to hit the accelerator? What if a tornado touches down, and you want the car to go in a different direction than it wants to because your mother is back in the town the tornado is heading for and you want to zoom back and get her out of there? What if there’s a flood (as happened in Japan after the earthquake some years ago)? Is the car just going to keep driving into the water, or will it make a fast U-turn and head in the opposite direction? And how will it know exactly which direction to take? What if there’s an earthquake and the roads are all completely messed up? What if there’s a huge wildfire and you have to make a hasty escape?
There are just too many situations where a human may need to take over control of the car. Plus, many people genuinely enjoy driving. I do think that self-driving as a feature will become available or even standard eventually, and that will be great; there has been many a morning when I would have preferred to just go out to the car, hit “Go to Work,” and let it take me there. But I want the feature where I can still retake control of the vehicle if necessary.