While many law students view ChatGPT and other generative AI tools as time-saving shortcuts for drafting, summarizing, and analyzing, many of those same students (and most educators) are quick to condemn ChatGPT as a highly ineffective and inaccurate tool for legal research. It is notorious for getting substantive law wrong, for “hallucinating” cases that don’t exist, and for being unable to discuss anything more recent than its training-data cutoff of September 2021. Even the fiercest advocates for using ChatGPT in law schools (and I count myself among them) readily admit that ChatGPT generally falls flat as a legal research tool (though a skilled user who is acutely aware of its limitations may still use it effectively to supplement other research tools).
Yet this failing, which may be fatal to the tool’s present utility in law practice, may make it a rich pedagogical resource in law school.
Law students complain mightily about the inadequacy of instruction in legal research, with most discovering that they “really” learn how to do research only on the job. Likewise, legal research and writing instructors are often overwhelmed by the ever-increasing breadth and depth of skills they are expected to teach during the first year of law school. ChatGPT is certainly not the panacea that some make it out to be, but perhaps it offers a solution to this conundrum.
By instructing law students to use ChatGPT to generate text that is certain to contain inaccuracies, educators can teach students how to verify the information using more conventional and reliable resources like Westlaw and LexisNexis.
For example, let’s use the prompt:
If a New Jersey web marketing company refuses to offer its services to a person on the basis of that person’s sexual orientation, in violation of New Jersey law, but the business claims to be engaged in expressive conduct protected by the First Amendment of the U.S. Constitution, what is the likely outcome if the person sues under New Jersey law? Cite case law in the answer.
This prompt is likely to elicit a substantively incorrect answer for several reasons. First, it refers to the law of a specific state. Because of the way ChatGPT was trained, it tends to conflate the laws of multiple states and disregard nuanced differences among them. Second, it requires an understanding of the concept of supremacy: whether a federal constitutional right can override a state statute. Third, and perhaps most importantly, a correct answer requires an awareness of the U.S. Supreme Court’s 2023 decision in 303 Creative LLC v. Elenis.
With the understanding that even the same prompt will produce slightly different results, here is what ChatGPT gave me in response to that prompt:
I’m not a legal expert, but based on existing legal frameworks, New Jersey law explicitly prohibits discrimination based on sexual orientation in places of public accommodation, such as businesses that are open to the public. The New Jersey Law Against Discrimination (NJLAD) is the relevant state statute in this situation. A web marketing company would likely be considered a place of public accommodation under this law.
The company’s claim that its services constitute “expressive conduct” protected by the First Amendment could bring into play a line of cases that consider the tension between anti-discrimination laws and free speech rights. One notable case is Masterpiece Cakeshop, Ltd. v. Colorado Civil Rights Commission, 584 U.S. ___ (2018). In that case, the U.S. Supreme Court didn’t settle the larger question but did emphasize the importance of treating religious objections neutrally, suggesting that discrimination laws can coexist with certain free speech rights if applied impartially.
Another case to consider is Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, 515 U.S. 557 (1995), where the Court ruled that a private parade organizer could not be compelled to include groups presenting messages the organizer did not wish to endorse. However, the facts in Hurley were unique and may not fully apply to a standard business operation like a web marketing company.
On the balance of case law and statutory law, the likely outcome of a lawsuit in New Jersey would seem to favor the individual suing the company. The New Jersey courts would probably hold that the web marketing company cannot refuse service based on sexual orientation, as that would be in direct violation of the NJLAD. The company’s First Amendment claim would be weighed against the state’s compelling interest in eradicating discrimination, and based on current precedent, it would likely not override the anti-discrimination statute.
This situation, of course, could vary depending on the specifics of the case, but as of my last update in September 2021, this would be the likely legal landscape.
This answer is full of substantive claims that can be readily verified or disproven.
A first-year law student with minimal research training can check whether the cases cited in the answer actually exist in databases like Westlaw and LexisNexis. The student will then have to read those cases (or the editor-written summaries of them) to determine whether they support the propositions stated in ChatGPT’s output. Further, the student will need to ensure the cases are still good law; during this step, the student is likely to discover the existence of 303 Creative. Armed with a basic understanding of the relevant cases and the language courts use to discuss this issue, the student can then look more broadly at the substance of ChatGPT’s output and assess which parts are accurate and which are not. Another component of the assignment is verifying the accuracy of the references to state law, which can be done via Google, government websites, or subscription databases.
An instructor (or TA) could prepare an answer key well in advance of this assignment, minimizing the need for oversight or time-consuming feedback. The subject matter is timely, the nature of the task is interactive, and students begin to learn the strengths and weaknesses of ChatGPT as they develop their legal research skills. This type of task is also excellent training for law review, where student editors must verify the substantive claims of the authors whose articles they are editing for publication.
As illustrated by this example, the shortcomings of AI in legal research should not deter educators from leveraging these tools to improve their pedagogy. Rather than expecting AI to replace traditional methods, educators and students can and should use these tools to facilitate deeper understanding and more rigorous legal research. The tools’ shortcomings are rich pedagogical opportunities.
By learning to scrutinize and verify AI-generated information, law students not only become adept researchers but also conscientious consumers of information. As legal educators, we must think creatively—particularly in this era of rapid technological advancements—to produce lawyers who are meticulous, critical, and ethically responsible in their research.
Perhaps there is some truth to the saying that “those who can, do; those who can’t, teach.” So long as ChatGPT can’t effectively do legal research, it may serve as an innovative tool for teaching it.