Big Change to the LSAT May Alleviate Time Pressure for Some Takers

Posted in: Education

Notwithstanding some recent competition, the Law School Admission Test (LSAT) remains the most widely used and accepted standardized test considered by American law schools in admitting new students. That is why it is significant news that the President/CEO of the Law School Admission Council (LSAC), the organization responsible for designing, administering, and grading the LSAT, recently announced changes to the LSAT’s format, beginning in August 2024. In particular, the LSAC will remove a particular kind of question—formally called “analytical reasoning” but popularly known as “logic games”—from future tests. The LSAC head explained that the organization had been “researching alternative ways to assess analytical reasoning skills . . . as part of a legal settlement,” and decided to replace the logic-games section with an additional section of “logical reasoning” questions that seek to “assess the same deductive reasoning skills” as the logic-games section had.

Those who have taken the LSAT anytime over the last four-plus decades know that the logic-games section is distinctive in standardized testing: test takers are given a list of conditions or rules about a series of interactions among people or things, and are then asked questions that require them to deduce further conditions or rules that, as a logical matter, must apply to hypothetical interactions involving the entirety or a subset of the people or things involved. For example, takers might be asked to envision a round table that seats persons A-H, and then given a series of conditions about who must be seated next to (or across from) whom, who can never be seated next to (or nearby) whom, etc. From the information given in each setup, takers are then asked a cluster of questions, one of which might begin with something like, “If A is seated across from D, which of the following statements could but need not be true of the relative locations of persons B and E?” (followed by statements such as “B and E are next to each other,” or “B and E are at least two seats apart,” etc.)

Although I personally rather liked the logic-games sections of the LSAT when I took the test in the 1980s, many (perhaps most?) LSAT takers over the decades have identified this section/format as the “scariest” part of the LSAT. Certainly the format of logic-games questions is less familiar to standardized test takers than are the formats of other LSAT sections. For example, the “reading comprehension” part of the LSAT is similar in format to (even as it may be more challenging than) sections on the SAT (which used to be called the “Scholastic Aptitude Test” or the “Scholastic Assessment Test” but which for years has borne the formal title “SAT”) or ACT (originally short for “American College Testing”) exams that many students take when applying to college. Beyond unfamiliarity, one reason the logic-games section is scary is that each logic-games fact pattern (the series of conditions involving a particular set of people or things) gives rise to a number of test questions based on that pattern, and if a taker for whatever reason has a tough time deducing some information central to that particular set of conditions, the taker might miss a handful of LSAT questions at a time. So the stakes in being able to figure out the particular “keys” for a given logic-games setup can be quite high.

A second reason takers over the years have dreaded the logic-games section is that many students run out of time on this section; although I don’t have empirical data, over the years so many of my friends and colleagues have told me they felt much more harried working through the logic-games questions than they felt on all the other parts of the LSAT. This anecdotal evidence dovetails with my implicit observation above that there are a small number of “keys” to figuring out each logic-games fact pattern; if a taker sees the keys quickly, no problem. But if the keys don’t jump out at the taker, then panic and frustration can begin to set in, and time can seem to elapse quite quickly.

This last feature of the logic-games format—that it creates a great deal of time pressure for many takers—raises the question: Why do we include questions that takers can’t think about for a leisurely period of time before answering? And I’m not sure there is an adequate response to this fundamental question. (In this vein, consider this essay from a month ago discussing the decision the makers of the SAT have made to render that test less time-pressured.)

Consider this hypothetical but plausible scenario (which builds on my own past thinking and the academic work of other law professors like Bill Henderson of Indiana Law School): Suppose two law school applicants, Alan and Beth, take the LSAT at the same testing center on the same day. Each of them plows through the 101 questions contained in the four graded sections of the LSAT and tries diligently to fill in the bubbles on the multiple-choice scantron answer sheet that will be credited as “correct.”

Let’s begin with Alan. In general, he is able to spend as much time on each of the 101 questions as he needs to feel confident that he has picked the best or “right” answer. There are, to be sure, a few questions to which he would like to go back and devote additional attention if he had a bit more time. But overall he feels that he has been able to understand, unravel, and “solve” each of the queries in the four sections. When his exam is graded, we find out that Alan got 81 correct responses – he missed 20 questions, even though he felt he had picked the best response to each of them.

Beth, by contrast, is in our hypothetical unable even to read through—let alone think at all carefully about—the last six or so questions in each of the four graded sections. That is, for the last six questions in each section, Beth has completely run out of time and thus has had to fill in the scantron bubbles completely at random. (On the LSAT, there is no penalty for guessing; one’s total score is determined simply by the number of correct responses; the number of incorrect responses is not subtracted from the number of correct responses or factored into the formula in any other way.) So, in effect, Beth has “answered” 77 of the 101 questions, and has completely “guessed” at the other 24. But as to the 77 questions she answered, Beth feels pretty confident.

When Beth’s test is scored, we find out that of the 77 questions she had time to read and analyze enough to make her comfortable, she got 76 correct – a remarkably high rate! But as to the 24 questions on which she had to guess completely, she got only 5 correct – a number that seems in line with the fact that there are 5 answer options for each question. That brings Beth’s total number of correct answers to 81 (76 plus 5)—the same number achieved by Alan.
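The arithmetic behind the hypothetical can be sketched out explicitly. The constants below (101 questions, 5 answer choices, and Alan’s and Beth’s tallies) all come from the example above; the expected-value calculation simply assumes purely random guessing.

```python
# Sketch of the scoring arithmetic in the Alan/Beth hypothetical.
# All figures are the hypothetical numbers from the article, not real LSAT data.

TOTAL_QUESTIONS = 101
ANSWER_CHOICES = 5

# Alan answers every question deliberately and gets 81 right.
alan_raw = 81

# Beth reaches only 77 questions, answering 76 of those correctly,
# and fills in the remaining bubbles at random.
beth_answered = 77
beth_correct_answered = 76
beth_guessed = TOTAL_QUESTIONS - beth_answered          # 24 guessed questions

# With 5 choices per question, random guessing yields 1-in-5 odds,
# so the expected number of lucky hits is 24 / 5 = 4.8 (about 5).
expected_guess_correct = beth_guessed / ANSWER_CHOICES

# In the hypothetical she actually gets 5 of the guesses right.
beth_raw = beth_correct_answered + 5

print(beth_guessed)            # 24
print(expected_guess_correct)  # 4.8
print(beth_raw)                # 81
print(alan_raw == beth_raw)    # True: identical raw scores
```

The point of the sketch is that Beth’s 5 lucky guesses are almost exactly what chance alone predicts, yet they are enough to make her raw score indistinguishable from Alan’s.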

Alan and Beth will thus both receive the same LSAT score—their 81 “raw” score will likely translate to a “scaled” score somewhere in the mid 160s. (“Scaled” scores are the ones used by law schools and other institutions for purposes of comparing individuals.)

Now, I certainly believe a scaled score in that range—which places a person at about the 90th percentile or higher of test takers nationwide—is a very good score, and one that would be considered by every law school as suggestive of substantial reasoning power. Yet one has to wonder whether such a score is a fair measure for Beth—whether such a score gives her enough credit.

If Beth had enjoyed an additional 30 minutes, say, to complete the LSAT (imagine an extra 7.5 minutes for each of the 4 graded sections), her raw score might have increased from 81 to something like 91 or better. A raw score of 91 would translate to a scaled score of about 171 or 172 (at or above the 97th or 98th percentile nationally), which is just below the median score of students who attend Yale Law School, the law school to which it is most difficult to gain admission.

Additional time might have helped Alan, too, but not nearly as much. If he had a few more minutes to go over his responses, he might have caught a few of his careless errors, and might also have been able to think through better the couple of questions over which he was agonizing. But he probably wouldn’t have picked up more than two or three additional correct responses; remember, he was pretty happy with the amount of time and confidence he had for virtually all of his 101 responses.

From a law school admissions perspective, Alan and Beth appear to be identical—so far as LSAT performance is concerned—even though Beth, given just a little more time, would have substantially outperformed Alan on the test, and set herself apart.

The possibility—indeed the inevitability—that there are real-life Beths and Alans out there led Professor Bill Henderson over a decade ago to explore whether the LSAT places weight—perhaps too much weight—on speed as a test-taking skill. According to Professor Henderson, in the field of “psychometrics [test design], it is widely acknowledged that test-taking speed and reasoning ability are separate abilities with little or no correlation to each other.” That is, a person’s abilities, respectively, to reason well and to reason quickly aren’t very related.

The LSAT is supposed to measure reasoning ability, or “power.” As Professor Henderson observed, “test-taking speed is assumed to be an ancillary variable with a negligible effect on candidate scores.” But that may not be true.

Traditionally, the LSAT makers have maintained that they are interested in testing only for reasoning power to discern the best answer, not for superior quickness in reaching the best answer. Yet it seems quite plausible to think the exam does value speed, whether by design or not.

LSAT scores are used by law school admissions offices primarily because they correlate to law school grade performance better than any other single criterion (including college grades) does. In other words, LSAT scores do have some meaningful correlation to law school grades. (A formula blending LSAT score and college GPA yields a number that correlates to law school grades better than does either LSAT score or college grades alone, but the LSAT undeniably has some predictive power here.)

Now consider these two points together, and the question they raise: It seems that the LSAT measures and values speed. And it seems that the LSAT correlates to law school grade performance. The question becomes: Is the correlation to law school grades we observe related to the speed aspect of the LSAT?

One possible reason for the correlation may be that law school exams also measure and value speed. Indeed, the majority of law school grades—especially those given in the first, and most formative, year of law school—are based upon in-class, three- or four-hour, “issue-spotting/issue-analyzing” exams, in which students are asked to read hypothetical factual situations and to quickly identify and discuss the (perhaps dozens of) legal issues implicated by the facts. (No wonder these are nicknamed “racehorse” style exams.)

Would other ways of evaluating law student performance yield correlations to LSAT performance that are very different from those generated by the racehorse tests? Perhaps. Possible grading alternatives to in-class short exams would include take-home exams that students have 8 hours or more to complete, or papers (in lieu of exams) written over the course of weeks. But many of these alternative testing modes have always been complicated by the possibility of illicit collaboration (i.e., cheating) among students. And the recent emergence of ChatGPT and other AI tools may make the use of open-book, time-leisurely exams even more challenging.

What about the possibility that the extensive use of time-pressured, issue-spotting/issue-analyzing exams in law schools is justified by the nature of time-pressured bar exams, or the nature of time-pressured legal practice? If we believe the use of these “racehorse” exams is indeed justified, then we should explain why and how. And if we can’t provide that explanation, we should ask ourselves why we use tests that seem to value speed so much.

One initial response legal educators might make is that the bar exam that all new lawyers must take is also a time-pressured, largely issue-spotting, affair. Thus, defenders of the status quo might argue, time-pressured exams before and during law school are perfectly appropriate training and/or screening devices for those who want to join a profession that uses a similar test as a non-negotiable barrier to entry.

But this response would simply widen the debate, for the bar exam structure is itself something over which legal educators should have a fair amount of influence. Bar examiners in most states involve law professors in devising and grading bar exam questions each year. Accordingly, if the legal educational establishment and the practicing bar became convinced, and forcefully communicated their view, that the issue-spotting question format on the bar exam did not do a decent job of assessing the skills needed for the successful practice of law, then there would be every reason to hope and believe that bar exam makers, over time, would heed this professional consensus and adapt the structure of the exam accordingly.

And, in fact, bar exam structure and content are being reconsidered in relatively basic ways these days: the so-called “Next Generation” bar exam project is carefully considering the subjects and skills that are appropriate to test, and the best formats to be used to assess knowledge of those subjects and mastery of those skills.

All of this brings us to two additional key questions: What are the most important skills in practicing law? And, how are those skills measured and/or ignored by time-pressured testing formats?

It’s true, of course, that lawyers must think and act on their feet quite often. And time-pressured exams might be assumed to do a fair job of measuring quick thinking. Lawyers who make oral arguments in appellate tribunals must process information and ideas and respond fast. So must trial attorneys deciding whether—and in what way—to object to (or defend) the introduction of testimony that is, at the same moment, coming out of the witness’s mouth, or, in the case of a document, being handed to them for the first time.

But it turns out that, at least as far as the LSAT and perhaps law school racehorse exams are concerned, the research to date suggests that the oral advocacy skills of the kind most central to in-court trial or appellate work do not correlate very well with performance on time-pressured legal exams. Meanwhile, most lawyers focus not on oral advocacy, but rather on written work—motions, memos, briefs, contracts, releases, settlements, corporate filings and other documents. That leads us to ask whether the skills required in drafting these documents correlate to the skills measured by timed law school exams.

This is not an easy question to answer. There may be some data suggesting the relevance of some time-pressured exams to some real-world-like written products. And many people believe that even when it comes to courtroom advocacy, written products—briefs and motions—are more important in obtaining client outcomes than are live hearings or oral arguments.

More research needs to be conducted to evaluate how the way we assess (in law school) relates to the skills needed for success in the real legal world.

Tags: Law School, LSAT
