Surgeon and bioethicist Charles E. Binkley discusses the ethical implications and potential harms of using artificial intelligence (AI) in healthcare decision-making, particularly focusing on informed consent and physician responsibility. Dr. Binkley argues that patients should be informed when AI is used in their care, and that healthcare providers have a duty not only to inform patients of potential risks but also to mitigate those risks, emphasizing that the use of AI does not absolve physicians of their responsibilities to patients.
Surgeon and bioethicist Charles E. Binkley discusses the ethical implications of using artificial intelligence (AI) models in clinical decision-making, particularly focusing on patient informed consent. Dr. Binkley argues that patients should be fully informed about the use of AI in their healthcare, not only as patients but also as data donors and potential research subjects, to maintain autonomy, transparency, and trust in the physician-patient relationship.
Cornell Law professor Michael C. Dorf considers the implications of ChatGPT and other generative AI tools in law schools. Professor Dorf observes that, for now, smart, well-motivated students will outperform AI in most tasks required of law students, but legal educators will soon have to grapple with the reality that banning AI-based tools will make less and less sense as those tools become mainstream in various aspects of legal practice.
Cornell professor Joseph Margulies expresses concern over the ability of ChatGPT—the AI-powered chatbot—to produce increasingly sophisticated and accurate written work that some college students might submit instead of putting in the painstaking effort of writing on their own. Professor Margulies asked ChatGPT to respond to an assignment akin to one he would give in his own class, and it generated a B-quality essay. He then explores what this means for student learning—particularly in the context of writing.
Charles E. Binkley, director of bioethics at Santa Clara University’s Markkula Center for Applied Ethics, describes some critical ethical issues raised by the use of artificial intelligence (AI) and machine learning (ML) systems for clinical decision support in medicine. Dr. Binkley calls for resolution of these issues before these emerging technologies are widely implemented.
Cornell University law professor Michael Dorf explores the relationship between renewed discussions about artificial intelligence (AI) and the rights of non-human animals. Dorf argues that our current portrayals of AI reflect guilt over our disregard for the interests of the billions of sentient animals we exploit, torture, and kill in the here and now.