AI can’t dream. It can’t write like a poet.
But it can get a B, and that changes everything.
I teach at Cornell University in upstate New York. Every fall, I introduce a new group of students to the complexity of the criminal legal system. I’ve taught the class for years, and by Cornell standards, it’s big. That’s more a commentary on student interest than professorial gifts; many students today care a great deal about policing, prosecution, and prisons—which is to say, they care about justice and its opposite—and that’s a good thing.
But they also care about grades. A lot. For some of them, way too much.
At a place like Cornell, most students are excellent test-takers but mediocre writers. So, I never give them tests. I make them write. A lot. For some of them, way too much. A student who aces an exam might be a nuanced thinker, but the former certainly doesn’t guarantee the latter, and they didn’t become the latter by getting good at the former.
Cornell students can learn the stubborn facts about mass incarceration. They can memorize the history of broken windows policing, stop and frisk, and the war on drugs. They can file away stories about super-predators and solitary confinement. And they can repeat it during a timed exam. Big deal—they could do all that in high school.
What they need to learn is how to engage critically but compassionately with a complex and angry world. To take the measure of an argument, turn it inside out, and reveal the assumptions that would rather stay hidden and the myths that would rather go unchallenged.
And most of all, they need to grasp the difficult but morally urgent truth that while some people have done monstrous things, there are no monsters, and that much of the world’s suffering can be traced to the seduction that some among us are less than human—and worse still, that you or I can reliably separate us from them.
In a large class, the only way to develop these skills is to wrestle with the written word. Students have to read closely, debate freely, and reflect deeply. Then they have to write. With (proverbial) pen in hand, they have to exercise both sides of the brain—or better, the head and the heart—to communicate not simply the bare facts, but their conception of the good society, one that is wise but decent, safe but just, secure but free.
The aim of this exercise is not that they arrive at “the answer,” as though there were only one way to understand, for instance, the vagrancy laws or hot spot policing. Instead, it is that they begin to understand how and why it is so difficult in the United States to achieve something we might call justice.
But that is only part of my goal. I also want them to develop their skills as writers. To craft something that is not merely clear and direct but also vibrant and alive. Something that frees itself from the great sucking muck of vacuous blather that surrounds them every day and approaches the elegance that is possible with the English language.
I want them to leave my class frustrated and angry, but also passionate and inspired. They already want to change the world; I want them to build the moral and intellectual tools they need to get the job done.
ChatGPT wants them to get a B. At least, it would if it could want anything.
I recently gave Chat an assignment. I asked it to write a seven-page paper about convict leasing, the prison labor system used primarily in the South in the late 19th and early 20th centuries, when state-run prisons leased mostly Black prisoners to private companies as a source of cheap labor. Convict leasing is an under-studied period that sheds light on the enduring link in this country between race, bondage, and money. I asked the same question of my students last fall.
Chat wrote a B paper in 90 seconds. I can’t say it was particularly good, but if I’m honest, neither can I say I’d be able to distinguish it from something a student might write. It was clear enough. Grammatically correct. About what we’d expect from a machine that trawls the writing that is out there to generate a close facsimile: responsive but not inspired. To be sure, Chat’s paper was short, but I assume students know how to entice the program to write more complete answers. After all, Chat has already written entire articles. And as Chat gets better, that B will become an A. Perhaps for some faculty, it already is.
And that’s the end. There will inevitably come a time, probably around 11:00 on a winter night, when a student who hasn’t slept for two days but who must submit a paper in an hour will decide that a Chat B is good enough. Yes, there are detection programs, but they apparently return some number of false positives, finding AI where there is only HI, which makes them useless.
That student will never struggle with the complex challenge posed by crime in a democracy. They will never hone their analytic skills or practice their moral voice. They will never learn the beauty of the language.
But far worse, because that student exists, they spoil it for everyone else. I wish students didn’t care about grades, but they do. And once it is known that you can either struggle for hours and get an honest B, or press a button and get a dishonest but undetectable A, the game is up.
It’s hard to see the silver lining that edges this dark cloud, but if it exists, it is that the rise of AI will be the death of grades. I’m not holding my breath…