Law and the Problem of Autonomous Cars

Fully autonomous vehicles will be in the hands of consumers in the near future.[1] Technology companies and car manufacturers have moved beyond small features like automatic braking and lane-keeping assist to fully workable cars that drive themselves with little human oversight. While the safety of these cars remains largely untested, some initial reports bode well. Google’s self-driving vehicles have logged around 1.8 million miles on the road since 2009 and have been in only 12 minor accidents, and according to Google, none of those crashes resulted from the self-driving technology.[2] The company asserts that human drivers caused every one of these accidents: either the driver of another car collided with the Google car, or the Google employee behind the wheel had taken over manual control and caused the accident.[3]

Yet while driverless cars may be safer than their human-controlled counterparts, when they do crash they present unique moral and legal problems. For example: should the carmaker be liable for accidents occurring in a self-driving car (and, if so, under what circumstances)? Should the driver have any responsibility to take control of the self-driving car in order to avoid accidents? Beyond these relatively straightforward moral and legal questions, even deeper problems may arise. To illustrate some of these complexities, we first turn to a series of hypotheticals posed in a recent article.[4] Then, with these potential problems in mind, we look to possible ways the law could respond.

A Hypothetical

Professors Bonnefon, Shariff, and Rahwan imagine three scenarios in which a self-driving car cannot avoid some harm to humans. The most interesting of these is a case in which a self-driving car with a single passenger is driving down a road and encounters a group of pedestrians in its path. The car calculates that it can stay on the road, save its passenger, and kill several pedestrians, or it can swerve off the road, kill its passenger, and save the pedestrians.[5] If there are 10 pedestrians in the road and one passenger in the car, what should the car be programmed to do?

There is no clear answer to this problem. From a utilitarian perspective, it is plausible to argue that the car should be programmed to sacrifice its passenger and save the 10 pedestrians. However, if self-driving cars are significantly safer on average than non-autonomous cars, there may be a utilitarian reason to program the car to protect its passenger instead, even at the cost of the 10 pedestrians.[6] A car purchaser who knows that the car would save the pedestrians might be considerably less likely to buy a self-driving car in the first place. If enough people forgo the purchase, the average safety benefit of having more autonomous vehicles on the road may never be realized. It may be, then, that the car should protect its passenger so that more people will buy autonomous cars.
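To make the programming question concrete, here is a minimal sketch (in Python) of the kind of decision rule at stake. It is purely illustrative: the function name, its parameters, and the simple head-count comparison are my own assumptions for exposition, not anything drawn from actual vehicle software or from the article’s authors.

    # Illustrative only: a toy "utilitarian" crash-decision rule.
    # Nothing here reflects real autonomous-vehicle software.

    def choose_maneuver(passengers_at_risk: int, pedestrians_at_risk: int) -> str:
        """Return 'stay' (endanger the pedestrians) or 'swerve' (endanger
        the passengers), choosing whichever costs fewer expected lives."""
        if pedestrians_at_risk > passengers_at_risk:
            return "swerve"  # sacrifice the passenger(s) to save more lives
        return "stay"        # protect the passenger(s)

    # The hypothetical above: one passenger, ten pedestrians.
    print(choose_maneuver(passengers_at_risk=1, pedestrians_at_risk=10))  # "swerve"

A passenger-protective rule would simply invert the comparison. The coding is trivial either way; the hard question is which rule to adopt, which is precisely why this is a legal and moral problem rather than an engineering one.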

As we can see, the question is highly complicated even if we confine ourselves to a utilitarian approach. And even if this particular problem never arises, there will certainly be complex questions about the responsibility of carmakers when self-driving cars crash.

A natural question, then, is whether, and if so how, the law should respond to these potential moral and practical problems. I will not wade into the debate about whether the law should respond, but will briefly sketch two ways in which it could.

How the Law Could Deal with Autonomous Cars

Statutory Law

The first route for dealing with the practical and moral quandaries above is for legislatures, at either the state or federal level, to pass statutes creating a body of statutory law governing self-driving cars. Four states and the District of Columbia have, in fact, already started to do so.[7] California’s law, for instance, requires autonomous cars to allow passengers to disengage the autopilot and to warn passengers when the autopilot system fails.[8] Other states require that a person sit in the driver’s seat and be able to take control of the vehicle.[9] Yet these laws are sparse, and they fall far short of answering the many questions that will arise once autonomous cars are sold on the open market.

States (or the federal government) could expand these statutes to respond to new problems associated with autonomous cars. To address the hypothetical posed above, for example, a state could require that autonomous cars protect their occupants at all costs, or that they make the decision that saves the most lives.

Tort Law

Another route suited to dealing with the problems of self-driving cars is tort law, and particularly products liability law.[10] Instead of (or in addition to) general statutory safety standards imposed on autonomous cars, legislatures could leave certain problems for the courts to resolve case by case. Questions of a carmaker’s liability for harms arising from the operation of the vehicle would be particularly suited to products liability. While I will not delve into products liability law in any depth, note that injured persons could bring manufacturing defect, design defect, failure-to-warn, or breach-of-warranty claims, and courts could respond by adopting a negligence standard, a strict liability standard, or by refusing to impose liability on carmakers at all.

Going back to the hypothetical, let us assume that the car was programmed to save the most people it could. If the car crashed and killed or injured its driver, how would a court respond to a products liability claim brought on the driver’s behalf? If the carmaker did not warn that the car would act as it did, the driver might have a valid failure-to-warn claim. Even if the carmaker did alert the driver, there might be a breach of an implied warranty of safety: if the car could have saved the driver, it should have. Conversely, if the car had been programmed to protect its driver and had injured the pedestrians instead, they might similarly be able to bring liability actions against the carmaker. Adding a further wrinkle, the driver or the pedestrians might have a strong claim that the carmaker foresaw the possibility of such injury, since it programmed the car to choose whom to injure.

Comparing the Approaches

Each of these two approaches has potential downsides. Because the technology of autonomous cars is developing so quickly, any comprehensive statutory scheme may well be rendered obsolete not long after enactment. Further, a broad statutory scheme might hinder the natural advancement of the technology: if developers are forced to abide by outdated rules, they may be cornered into decisions that do not actually lead to safer autonomous cars.

Instead, some have found a solution in products liability law, a doctrine many believe “has proven to be remarkably adaptive to new technologies.”[11] The judicial, case-by-case method may allow the legal system to take a more incremental approach to the problems arising from autonomous vehicles, something especially important when those problems are evolving rapidly.

Yet some commentators have disagreed with these premises and found the incremental approach itself misguided. They have argued instead that a comprehensive regime (and perhaps even a federal one) is necessary to “minimize the number of inconsistent legal regimes that manufacturers face and simplify and speed the introduction of this technology.”[12]

In the end, a hybrid approach like the one suggested by John Villasenor may be the most reasonable response. Rather than dealing with self-driving cars wholly in any single way, he argues, it would be best to develop a federal scheme that prescribes a basic level of safety but allows the states to fill in the gaps with judicial decisions or more pointed state statutes.[13]

[1] See John Greenough, The Self-Driving Car Report, Business Insider (July 1, 2015, 2:00 PM), http://www.businessinsider.com/report-10-million-self-driving-cars-will-be-on-the-road-by-2020-2015-05 (suggesting that user-operated fully autonomous cars will arrive on the market within five years).

[2] Adrienne LaFrance, When Google Self-Driving Cars Are in Accidents, Humans Are to Blame, The Atlantic (June 8, 2015), http://www.theatlantic.com/technology/archive/2015/06/every-single-time-a-google-self-driving-car-crashed-a-human-was-to-blame/395183/.

[3] Matt Richtel & Conor Dougherty, Google’s Driverless Cars Run Into Problem: Cars With Drivers, The New York Times (Sept. 1, 2015), http://www.nytimes.com/2015/09/02/technology/personaltech/google-says-its-not-the-driverless-cars-fault-its-other-drivers.html.

[4] Jean-François Bonnefon, Azim Shariff & Iyad Rahwan, Autonomous Vehicles Need Experimental Ethics (Oct. 13, 2015) (unpublished manuscript), available at http://arxiv.org/pdf/1510.03346v1.pdf.

[5] Id. at 3 fig.1.

[6] See Why Self-Driving Cars Must Be Programmed to Kill, MIT Technology Review (Oct. 22, 2015), http://www.technologyreview.com/view/542626/why-self-driving-cars-must-be-programmed-to-kill/.

[7] Claire Cain Miller, When Driverless Cars Break the Law, The New York Times (May 13, 2014), http://www.nytimes.com/2014/05/14/upshot/when-driverless-cars-break-the-law.html.

[8] S.B. 1298, 2011–2012 Leg., Reg. Sess. (Cal. 2012) (codified at Cal. Veh. Code § 38750), https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201120120SB1298.

[9] Automated Driving: Legislative and Regulatory Action, Stanford Center for Internet and Society, http://cyberlaw.stanford.edu/wiki/index.php/Automated_Driving:_Legislative_and_Regulatory_Action (last visited Nov. 20, 2015).

[10] See John Villasenor, Products Liability and Driverless Cars (Center for Technology Innovation at Brookings ed., 2014), available at http://www.brookings.edu/~/media/research/files/papers/2014/04/products%20liability%20driverless%20cars%20villasenor/products_liability_and_driverless_cars.pdf.

[11] Id. at 15.

[12] Id.

[13] Id. at 17.
