Is Informed Consent Necessary When Artificial Intelligence is Used for Patient Care: Applying the Ethics from Justice Cardozo’s Opinion in Schloendorff v. Society of New York Hospital


This is the first article in a series exploring ethical concerns about patient informed consent when Artificial Intelligence is used for clinical decision-making.

A cornerstone tenet of bioethics is that patients must be respected as autonomous beings and that their preferences should determine what is acceptable and unacceptable in their health care. Justice Benjamin Cardozo enshrined this ethical principle in law in his 1914 opinion in Schloendorff v. Society of New York Hospital, writing that “Every human being of adult years and sound mind has a right to determine what shall be done with his own body.” Artificial Intelligence (AI) models intended to improve clinicians’ ability to diagnose, treat, and prognosticate are now being widely deployed. An unresolved question in the ethics literature, and one as yet untested in the legal system, is to what extent patients must be informed that AI models are being used in their health care and, in turn, asked to consent. From an ethics perspective, with potential application to law, what is the patient’s relationship to the AI model, and what are the normative and legal expectations of that relationship? Patients relate to AI models in several relevant ways: as patients, as data donors, as learning subjects, and, in some cases, as research subjects.

Patients as Patients

Patients, as autonomous beings, should be informed about how and why medical recommendations are being made, including whether their clinician collaborated with an AI model. Collaborating with an AI model differs from referring to a journal article or clinical pathway, or from consulting a colleague, in several ethically relevant ways.

First, a clinician can assume that journal articles, clinical pathways, and the opinions of colleagues are grounded in scientific evidence and that their recommendations are the most beneficial and least harmful to the patient. Before a new version of a treatment or procedure is adopted, it must not only be proven to provide the outcome or effect it claims but also be more beneficial and/or less harmful than the previous iteration. In most cases, AI models have not been subjected to the same level of scientific rigor. While a model may have been validated for accuracy, accuracy does not automatically translate into increased benefit or decreased harm. For instance, one could hypothesize that an AI model that predicts which patients will develop chronic kidney disease would be beneficial. The model could be trained to make that prediction, and its high degree of accuracy could be demonstrated. That, however, does not prove that the model is more beneficial or less harmful than a clinician would be without the model’s decisional support.

Furthermore, in the case of a journal article, clinical pathway, or colleague, a clinician can understand the reasoning behind the recommendation and, in turn, explain it to the patient in terms of the scientific evidence. This is not true of most AI models, for which the exact variables driving a prediction are unknown. Finally, a clinician can rely on journal articles, clinical pathways, and colleagues to prioritize the patient’s benefit over competing values such as increased efficiency or cost savings. While multiple outcomes can be realized simultaneously, the patient’s benefit must always come first. Physicians are ethically obligated to prioritize their patients’ interests, but there is no assurance that AI models similarly prioritize a patient’s benefit over other competing outcomes.

A patient’s ability to make determinations about treatment depends on being respected as an autonomous being. Such respect requires not only that the patient be informed but also that the patient be able to trust that the clinician is truthful and transparent in their clinical reasoning, whether that reasoning draws on a journal article, a clinical pathway, or a colleague. This obligation extends to collaborations between clinicians and AI models. Because of the important differences between collaborating with an AI model and the ways in which clinicians have traditionally made decisions, the need to inform patients when AI models are used in their health care is even greater than in other settings. Failure to do so infringes on a patient’s right to determine what is done to their body and fails to meet the standard for informed consent. It can also breed mistrust and a new form of paternalism in which the clinician and the AI model purport to know better than the patient what is the right thing to do.

Patients as Data Donors

For a patient to receive a prediction from an AI model, the model must have access to the patient’s personal health information. Once that information has been shared with the model and the prediction made, it may also be used in the future to train other models. Because patients’ personal health information is used to train AI models, patients must be able to make a voluntary and informed decision to donate their data.

The history of medicine is rife with examples of patients’ tissue being taken without their knowledge or consent and being used for purposes other than the benefit of the patient from whom the tissue was taken. In the age of technology, a patient’s medical information is their digital phenotype and is as much a part of them as is their tissue. Whereas a patient’s tissue contains their genetic information, their health record is an account of how their individual genes are expressed, suppressed, mutated, deleted, broken, and repaired.

Patients as Learning Subjects

When patients are used as learning subjects, whether by humans or by machines, they should be informed, and participation should be voluntary. Currently, very few models used in health care are “unlocked” and able to learn from the predictions they have made in the past. However, continuously learning AI models have the potential to be extremely powerful because they steadily improve the accuracy of their predictions. Such models are all but certain to be the next generation of AI in health care, and patients will be their learning subjects.

Patients as Research Subjects

Finally, participation in human research should be informed and voluntary. Patients are used in two stages of AI model development: to demonstrate that a model is valid, and to prove that it is clinically beneficial. The first stage is most properly classified as quality improvement and thus does not require patients’ informed consent; during it, the model is shown to be accurate and to perform as claimed. The second stage, proving benefit, constitutes research. Benefit is established by demonstrating that the AI model, either by itself or in collaboration with a clinician, is as good as or better than the clinician alone. To show that the model is beneficial, researchers would need to randomize patients to each of three arms: (1) model alone, (2) clinician alone, and (3) clinician and model together. Because the conclusions of such a study are generalizable, this stage is best classified as research. (Even if one claims that a clinical pilot is actually a quality improvement initiative rather than a research study, that would not automatically negate the need for patients’ informed consent.) The clinician-patient relationship is such that patients can rightfully expect certain things from their clinicians, such as truthfulness, confidentiality, and maximum benefit with minimum harm. But clinicians cannot expect patients to donate their health data or to serve as learning or research subjects.

Failure to inform patients that AI models are making health predictions about them not only erodes trust in the physician-patient relationship but also renders both the AI model and its predictions opaque. If patients are not informed that AI predictions are being made in the first place, it is unlikely they will be informed of the predictions themselves. This could normalize the practice of using patients’ health information to make any prediction a health system or payer desires, without the patient being informed or giving consent.

Once again, the history of medicine has more examples than it should of clinicians doing things to patients that they had no right to do without the patients’ informed consent. Whether performing intimate exams on anesthetized patients, scheduling the same surgeon to perform multiple operations simultaneously, or performing an operation to which the patient had not consented, as in Schloendorff, clinicians have either not questioned the need for informed consent or have considered and rejected claims that it was necessary. History threatens to repeat itself as AI models are introduced into health care. Ethics has not always been the guiding light. Where ethics fails, law and regulation can prevail.
