Autonomous Cars and Surgical Robots: A Discussion of Ethical and Legal Responsibility

As our world becomes more and more automated, with devices that do everything from recording video, to turning off appliances, to telling us where to go (and even taking us there, as in the case of Google’s self-driving cars), the ethical and legal question of “Who is responsible if something goes wrong?” becomes increasingly important.

Traditionally, in basic situations within the legal field of torts, or personal injury, the identity of a wrongdoer is often clear, or at least readily ascertainable.  If Debbie trips Peter, then Debbie is liable to Peter for his injuries.  If Acme Ladder Co. manufactures and sells a ladder that collapses when Pamela is on it, then Acme is legally responsible for Pamela’s resulting injuries.  In these clear-cut cases, the ethical duty is essentially the same as the legal one.

However, there are now increasing numbers of situations in which the responsible party is less clearly defined.  I will explore two devices that create such situations: autonomous cars and robots that assist with surgery.  With the advent of self-driving cars and surgical robots, the answer to the question of who, exactly, would be responsible (in an ethical sense) and liable (in a legal sense) in the event of an error or accident becomes murky.  Because ethics is a philosophical discipline, a decision as to who is the ethical wrongdoer may not correlate with a decision as to who is the legally responsible party.  Laws, which are codified and can change only through a deliberative (and often lengthy) process, tend to lag behind when it comes to adequately addressing novel situations created by rapidly advancing technologies.

In this column, I will explain the framework for evaluating a situation as to both ethical responsibility and legal liability.  I will then apply that framework to two types of devices that blur the line between human action and machine action—self-driving vehicles and surgical robots.  I will consider two situations that present questions of who bears responsibility when something goes awry while these devices are operating, and I will discuss why the framework fails to produce acceptable outcomes.  Finally, I will propose ways for the law to “keep up” with ethics, and argue for the importance of a flexible legal paradigm in this context.

Frameworks for Evaluating Ethical and Legal Responsibility

Let me start by defining the vocabulary I will be using in this discussion.  I divide action into two levels: legal and ethical.  In this framework, an ethical decision should always be legal, but an action that is legal may or may not be ethical.  (I use “should” because this is a prescriptive model, but I acknowledge that there are situations where this ideal framework does not apply, and the law requires unethical action.)

The law provides only a baseline for acceptable behavior.  Legislators and their laws and regulations—as well as other aspects of government—should be concerned with ensuring that people act in accordance with this minimum standard.  The sources of ethical standards, on the other hand, can be the drives to maximize utility, contribute to social good, effect fairness or justice, and so on.

In the context of licensed professionals, ethical rules come from profession-specific rule-making bodies, such as a state bar, board, or licensing agency.  These entities are uniquely able to consider the conventions of the profession, the limitations and capabilities of the individual professional, and other factors that are relevant to determining whether a particular action should be encouraged or discouraged.

The Basic Rules of Personal Injury Law and How They Are Enforced

Put simply, the law of personal injury provides a remedy for a plaintiff who can prove that the defendant’s action or omission caused the plaintiff to suffer harm.

The law imposes on all individuals a duty of reasonable care to others, measured by how a “reasonable person” in the same circumstances would act.  If a person acts unreasonably and thereby injures another, the law imposes liability on the unreasonable actor.

When the defendant is a licensed professional, the law will look to ethical rules as a guide.  For example, in a legal malpractice lawsuit, a plaintiff may point to the attorney’s violation of the rules of professional ethics as evidence that her conduct fell below an expected standard of care.  Similarly, in a medical malpractice suit, a plaintiff would seek to establish that the physician’s actions were at odds with the standards accepted by the community of specialists in that discipline in order to prove that he breached his duty of care to the patient.

A second essential inquiry is one of causation.  The law requires a showing of both “actual” and “proximate” causation before it imposes liability.  To prove actual causation, a plaintiff in a lawsuit must show either that she would not have suffered injuries if it had not been for the defendant’s actions, or in some cases that the defendant’s actions were a “substantial factor” in bringing about the injury.  To prove proximate cause, also known as legal cause, a plaintiff must generally show that the defendant should have reasonably foreseen that his actions would cause the type of injury that the plaintiff suffered.

The Basic Rules of Products Liability and How They Are Enforced

Having thus laid out the framework for establishing legal liability in a personal injury case, I now turn to a second theory of liability: products liability.

The maker of a product owes a duty of care to the consumer, as well as to anyone who might foreseeably come into contact with the product.  This duty of care encompasses the duty to design a safe product, to manufacture it free of defects, and to warrant that the product is fit for its ordinary purposes.  If the manufacturer falls short on any of these duties, then it is legally liable for injuries that result from that failure.

If a consumer is using a product that malfunctions and injures a third party, the third party would likely sue the consumer as the operator of the malfunctioning device.  In the interest of fairness and justice, the law permits the consumer to seek “contribution” from the manufacturer of the defective product (basically, it allows the consumer to transfer a portion of the damages he must pay, based on the fault of the manufacturer), or indemnification—transfer of all the damages—if the consumer was entirely free of fault.  To take a purely hypothetical example, if a jury awarded the injured party $100,000 and found the manufacturer 80 percent at fault, contribution would shift $80,000 of the award to the manufacturer; if the consumer bore no fault at all, indemnification would shift the entire $100,000.

Autonomous Vehicles: Challenging the Legal and Ethical Frameworks

As a result of lobbying by Google, Nevada and recently California have enacted legislation permitting self-driving cars to take to the roads.  Both states require that a human be present in the car, sitting in the driver’s seat, and able to take control of the car at any time.  Although there have reportedly been no accidents at all during the automatic operation of Google’s driverless cars, it would seem inevitable that an accident will someday occur as their use becomes more prevalent.  Such an occurrence would present myriad issues for the existing framework of responsibility, as described above.  Consider, for instance, the following scenario:

Daniel is the backup driver in an autonomous vehicle.  He sits behind the wheel, as required by law, and is attentive to his surroundings.  It begins to rain lightly.  The car advises that in inclement weather conditions, a driver must manually operate the vehicle.  Because the rain is not heavy, Daniel believes it does not rise to the level of being “inclement weather,” so he allows the car to continue driving without assistance.  The car suddenly makes a sharp turn and crashes into a tree, injuring Daniel.

The most salient question that arises out of this hypothetical scenario is “Who is at fault?”  Should Daniel have taken the wheel when it began to rain, or was the car’s instruction too ambiguous to impose that responsibility on him?  Daniel would likely sue the manufacturer of the vehicle under a theory of products liability.  The manufacturer would argue that Daniel had a duty to operate the car manually when it began to rain.  In this scenario, only Daniel himself was injured, but how would responsibility be distributed if a third party had been injured as a result?

There is no clearly applicable ethical standard to govern how Daniel should have acted, and the appropriate legal framework is even less clearly defined.

Google ostensibly intends for autonomous cars to help people who are unable to drive, which poses a challenge to the current requirement that a person be in the driver’s seat and ready to operate the car manually at any given time.  If a blind person is using an autonomous car and is faced with a situation requiring intervention, was that person ethically irresponsible for getting behind the wheel in the first place?  Or was the manufacturer wrong to produce the vehicle with the intent of aiding drivers who require assistance?

As is illustrated by this and similar scenarios, the laws of products liability and personal injury, as currently formulated, are not equipped to address these questions.

Another Scenario Beyond the Bounds of Current Personal Injury Law: Robotic Surgery

A second scenario presents similarly complex questions of responsibility, albeit in a completely different context.

Surgeon Sam has a patient with a pancreatic tumor.  Sam has explained the surgical procedure to the patient, and the patient has provided informed consent for laparoscopic (that is, minimally invasive) surgery, for surgery with the use of a surgical robot (despite certain known risks of using the robot), and for open surgery.  Sam begins the surgery laparoscopically and identifies the tumor as unresectable (that is, it cannot be removed) by conventional laparoscopic surgery.  However, Sam reasonably believes that the robot’s greater dexterity and precision can safely remove the tumor—indeed, that is the very purpose of the robot.  Sam sets up the robot and is operating on the patient robotically when the robot behaves erratically, injuring the patient.  The patient survives the operation but dies from the cancer shortly thereafter.

If the patient’s estate seeks to recover damages for her injuries, a number of legal and ethical issues arise:

Was it ethical for the surgeon to offer the robotic surgery as an option for the patient, knowing the inherent risks?

Even though the patient was informed of the risks of using the robot, the surgeon exercised his professional judgment in offering the procedure as a viable option.  The American Medical Association’s opinion on informed consent states that “[t]he physician’s obligation is to present the medical facts accurately to the patient or to the individual responsible for the patient’s care and to make recommendations for management in accordance with good medical practice.  The physician has an ethical obligation to help the patient make choices from among the therapeutic alternatives consistent with good medical practice.”

However, a surgeon does not have to ask the patient whether she wants him to use one particular surgical instrument or another.  Why, then, must he disclose the use of the robot?  What distinguishes the choice of using the robot from the choice of using a particular set of forceps?  Under generally accepted standards in surgery, however, it likely would have been unethical for the surgeon not to tell the patient about the use of the robot because of the degree to which robotic surgery differs from convention.

Was it ethical for the surgeon to decide to use the robot?

To answer this question would require looking at what a reasonable surgeon would have done in the same situation.  That inquiry is no different here than in other medical malpractice cases, but the corollary questions are more complex, as follows:

Who should be legally liable for the patient’s injury?

The situation is complicated by the fact that the patient died of her cancer.  In a jurisdiction that recognizes survivorship actions—that is, claims by a successor for any loss or damages that the deceased person sustained or incurred before death—the patient’s estate would likely sue for the injuries caused by the use of the robot.  But even if the surgeon had not used the robot, the patient would have died of cancer in the same manner as she did, in fact, die after the robot was used.

Assuming that the patient’s estate can sue for injuries that the patient incurred during the operation, the surgeon and hospital would likely seek indemnification from the manufacturer of the robot, alleging that the robot’s erratic behavior was the cause of the injury.  The manufacturer would then likely assert that the surgeon should not have opted to use the robot in that particular situation and thus that, by doing so, the surgeon assumed the risk of injuring the patient.

Other legal and ethical considerations arising from this scenario would be the duty of the manufacturer to ensure the operators of the robot are adequately trained, and the duty of the hospital to properly credential surgeons to use the robot.  Similar questions would arise in the autonomous car scenario, such as whether the driver was adequately trained, and whether the duty to ensure that drivers of autonomous cars are properly trained falls to the manufacturer, the state, or some other entity.

It is conceivable that no party in the surgical-robot scenario described above acted unethically or illegally, and yet it would seem unfair and unjust to deny the injured patient a remedy for her injury—even more so if she suffered greatly as a result of the intraoperative mishap.

Establishing a Dynamic Framework to Address Advancing Technologies

As these examples illustrate, rapidly advancing technology can often lead to ethical and legal inconsistencies.  Sometimes the law imposes liability where there is no ethical violation, and sometimes ethical duties are imposed on entities that may not be implicated at all under the current legal framework for assigning liability.

Although there are no clear answers to how to address these shortcomings, I believe that the solution lies in a systems analysis (rather than a party analysis).  That is, instead of looking to the individual liability of each party involved in a chain of events, we should look to the entire system.

In the context of the autonomous car, the system consists of the manufacturer, the engineers, the state licensing agency, the driver, and external factors such as weather.  When the circumstances are taken as a whole, it is easier and more accurate to apportion liability and fault and to compensate injured parties for loss.  Under the current model, the state would almost certainly not be part of the litigation.

For the robotic-surgery scenario, the system consists of the robot manufacturer, the hospital, the surgeon, the relevant ethical standards (if any), and potentially the state and federal governments.  By focusing instead on individual parties and assessing their relative fault in causing an injury, the current legal framework will always be a step behind new technologies, and the systems that cause injury will not improve.  An analysis that looks to the system, rather than to the parties, will, I believe, prove far more just.

One response to “Autonomous Cars and Surgical Robots: A Discussion of Ethical and Legal Responsibility”

  1. Timothy Clemans says:

    There will be a push to have unmanned cars on the road for delivering goods, picking up people (driverless taxi cabs), dropping the owner off at work and parking at the owner’s house, gassing up, being serviced, etc.

    A big concern about autonomous cars is that they will have a high failure rate due to lack of maintenance. I think autonomous cars should refuse to start up if they have not been maintained in accordance with the required maintenance schedule.

    I imagine a future where law enforcement does not investigate crashes involving autonomous cars because fault is automatically determined using a standardized algorithm. Every autonomous car would have multiple video cameras and black boxes. There should be an automated process for determining fault based on the data from the cameras and black boxes.

    Autonomous driving may become the default. The liability then could be “the autonomous car allowed an intoxicated driver to drive even after the car detected failure to maintain lanes.” I believe this situation already exists with passenger planes from Airbus, where pilot inputs are evaluated by computers before the flight controls are manipulated.

    I think the law should mandate fully autonomous-capable cars by 2030 that don’t allow unsafe inputs from a human to reach the car’s controls. There should never be an accident caused by a human driver. I also think pedestrians should be at fault for walking in front of any car that hits them outside of a crosswalk. There needs to be a standardized algorithm for deciding whether an autonomous car should hit a jaywalking pedestrian in a situation where there is no time to brake and an emergency lane change would result in hitting another car or person.

    Summary: 1. Require every car to be capable of driving with no human input except the destination. 2. Have a legal document specifying what decisions the car is to make in every situation. 3. Have an automated system for deciding fault in an accident, without judges, cops, lawyers, etc., based on detailed digital records (driving record, video of the entire drive, all sensor data, the decisions made, maintenance records, etc.). 4. Refuse to let cars execute unsafe human inputs. 5. Have the cars automatically drive themselves to a maintenance facility on a schedule.