In February 2025, the ABA House of Delegates concurred in a set of revisions to Standards 302, 314, and 315 of the Standards and Rules of Procedure for Approval of Law Schools. The revisions took effect in August 2025. Full implementation is required by the start of the 2026-2027 academic year, and site teams will expect to see compliance progress beginning this August. Seventy-three law school deans signed a letter opposing the revisions, warning that they were "unnecessary and could potentially harm legal education." The House adopted them anyway.
The revised standards impose three obligations that, taken together, represent the most significant change to law school assessment requirements in a decade. First, Standard 302 now requires every law school to establish not only programmatic learning outcomes (PLOs) — the knowledge, skills, and competencies students should possess upon graduation — but also course-level learning outcomes (CLOs) for every course the school offers. Those CLOs must be "sufficiently specific and measurable," and for required courses, they must be uniform across all sections. Second, revised Standard 314 requires that every course in the first third of a student's J.D. credit hours include at least one formative assessment, with feedback that allows students to evaluate their performance relative to the course learning outcomes. Schools must also provide academic support for students who fail to achieve a satisfactory level on those assessments. Third, Standard 315 requires an ongoing evaluation of the school's program of legal education, including the relationship between CLOs and PLOs, submitted as a formal report during the decennial accreditation visit.
Each of these requirements is defensible as a matter of educational design. Together, they create a compliance workload that most law schools are not staffed to absorb, on a timeline that leaves little room for deliberation.
The scale of what the standards require
A midsize law school might offer 150 to 200 courses in a given academic year. Each of those courses now needs specific, measurable learning outcomes. For required courses, those outcomes must be identical across every section: the three professors who teach Contracts cannot each have their own learning outcomes, because Standard 302(c) mandates uniformity. Every set of CLOs must connect to the school's PLOs, and the school must be prepared to demonstrate that connection through a curriculum map or comparable document. The Standard 315 report template, released by the ABA in draft form in November 2025, asks for ten categories of information, including how the school assesses student attainment of each PLO, how assessment data has prompted curricular changes, and whether the CLOs themselves need revision.
For schools that have treated learning outcomes as a box to check on a syllabus — a sentence or two at the top of the page, drafted by the individual professor, reviewed by no one — the gap between current practice and the new requirements is substantial. And the formative assessment obligation under Standard 314 adds a second layer. Every first-year course must now include at least one formative assessment that provides feedback tied to the course's learning outcomes. Interpretation 314-2 specifies that acceptable feedback includes written faculty comments, model answers, or individual or group review sessions. The prior version of this interpretation contained language stating that "[a] law school need not apply multiple assessment methods in any particular course." That language has been eliminated.
The people best positioned to describe the burden are the ones doing the work. Susannah Pollvogt, writing for the LSAC, acknowledged that the scope of compliance can feel "overwhelming" and noted that most law schools have never designated an associate dean of assessment. The four-step plan she proposes — revisit PLOs, map the curriculum, revise CLOs, develop a five-year assessment plan — is sound in structure and staggering in scope. A school that has not yet begun this work has roughly four months to produce measurable learning outcomes for every course in its catalog, align them to programmatic outcomes, build a curriculum map, design formative assessments for all first-year courses, establish feedback mechanisms, and prepare the documentation infrastructure for a Standard 315 report.
That is the kind of problem an LLM can help with, if you understand what the help looks like and where it stops.
This is a design problem, not a documentation problem
The temptation — and I expect many schools will yield to it — is to treat the revised standards as a paperwork exercise. Write some learning outcomes. Put them on the syllabus. Check the box. This approach will produce compliance artifacts that satisfy an auditor reviewing a binder and change nothing about how courses are taught or how students learn. It also misses what the standards are actually asking for.
What Standards 302, 314, and 315 describe, taken as a system, is backward design: the curriculum design framework articulated by Grant Wiggins and Jay McTighe, in which you start with the desired outcomes, design assessments that measure those outcomes, and then build the instructional activities that prepare students for the assessments. The ABA did not use the term "backward design" in the standards. But the structure is unmistakable: define what graduates should know (PLOs), define what each course contributes to that goal (CLOs), assess whether students are achieving the course-level outcomes (formative and summative assessment under Standard 314), and evaluate whether the aggregate course outcomes are producing graduates who meet the programmatic goals (Standard 315).
Legal education has historically not done this. The dominant pedagogical model in American law schools — assign cases, question students in class, administer a single end-of-semester exam — was not designed around learning outcomes. It was designed around sorting: identifying which students could perform a specific kind of analytical task under time pressure. The Socratic method is a powerful teaching tool, but it was not built to demonstrate measurable attainment of specific competencies, and the traditional final exam was designed to rank students, not to evaluate whether a course achieved its stated learning goals.
The revised standards ask law schools to retrofit an outcomes-based assessment framework onto a curriculum that was not designed with outcomes in mind. That is a genuine intellectual challenge, not a clerical one. The question is not "what learning outcome can I write that sounds good on this syllabus?" It is "what should a student be able to do after completing this course that she could not do before, and how would I know?"
What an LLM can do
The generative labor involved in outcomes-based design is substantial, and much of it falls squarely within the capabilities of current large language models. An LLM can draft learning outcomes using appropriate taxonomic language: Bloom's cognitive levels (remember, understand, apply, analyze, evaluate, create) mapped to the specific content of a law school course. It can take a set of PLOs and a course description and propose CLOs that connect the two. It can generate formative assessment instruments: practice hypotheticals, short-answer exercises, issue-spotting problems, self-assessment checklists. It can draft rubrics that tie assessment criteria to specific learning outcomes. It can review a set of CLOs across the curriculum and identify gaps in PLO coverage, proposing where alignment is missing or where multiple courses redundantly address the same outcome without progression.
This is the kind of work I described in an earlier post as appropriate for delegation: structured generation of options that a professional then evaluates. The LLM is not deciding what the learning outcomes should be. It is producing drafts that the faculty member refines, challenges, and either adopts or discards.
A concrete example. Suppose a law school has adopted the following PLO:
Graduates will demonstrate the ability to identify and analyze legal issues in novel factual contexts, applying relevant legal rules and policies to reach well-reasoned conclusions.
A professor teaching first-year Torts could prompt an LLM with that PLO, a copy of the course syllabus, and the following instruction:
Draft five specific, measurable course-level learning outcomes for a first-year Torts course that align with the programmatic learning outcome above. Each outcome should use a single action verb from Bloom's taxonomy at the apply, analyze, or evaluate level. The outcomes should be assessable through a formative assessment administered within the first half of the semester.
The model will generate five outcomes. Some will be too broad ("analyze tort claims" is not measurable). Some will be well-calibrated. The professor's job is to read them with the same critical eye she would bring to a student's first draft of a research memo, keep the ones that reflect what she actually teaches, and revise the rest. The LLM did the generative labor; the professor exercised the judgment.
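Nothing about this requires code: the prompt above can go straight into a chatbot. But a school facing 150 to 200 courses will want to run the same pattern at scale, and the prompt works just as well programmatically. What follows is a minimal sketch, assuming the OpenAI Python SDK; the model name and the syllabus file path are placeholders, and any chat-capable API would serve.

```python
# A minimal sketch of the CLO-drafting prompt above, assuming the OpenAI
# Python SDK ("pip install openai"). The model name and file path are
# placeholders; the PLO and syllabus come from the school, not the model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

plo = (
    "Graduates will demonstrate the ability to identify and analyze legal "
    "issues in novel factual contexts, applying relevant legal rules and "
    "policies to reach well-reasoned conclusions."
)
syllabus = open("torts_syllabus.txt").read()  # the professor's own syllabus

instruction = (
    "Draft five specific, measurable course-level learning outcomes for a "
    "first-year Torts course that align with the programmatic learning "
    "outcome above. Each outcome should use a single action verb from "
    "Bloom's taxonomy at the apply, analyze, or evaluate level. The outcomes "
    "should be assessable through a formative assessment administered within "
    "the first half of the semester."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model the school has access to
    messages=[{
        "role": "user",
        "content": (
            f"Programmatic learning outcome:\n{plo}\n\n"
            f"Course syllabus:\n{syllabus}\n\n"
            f"{instruction}"
        ),
    }],
)
print(response.choices[0].message.content)  # draft CLOs, routed to faculty for review
```

The output is a starting draft, nothing more; the review step described above is where the work actually happens.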
The same pattern applies to formative assessment design. A professor who has never written a formative assessment — and many law professors have not, because the traditional curriculum did not require one — can describe her course's learning outcomes to an LLM and ask for three assessment instruments that test student attainment of those outcomes, along with model answers and a rubric. The model will produce instruments of varying quality. The professor evaluates them against her knowledge of what her students need and what her course actually covers. This is a faster path to a usable formative assessment than starting from scratch, particularly for a faculty member whose expertise is in tort law, not instructional design.
Curriculum mapping is another area where the generative capacity is valuable. A school that has 150 courses, each with four or five CLOs, needs to determine which PLOs each set of CLOs advances, whether any PLOs lack adequate curricular support, and whether the progression from introductory to advanced courses reflects appropriate scaffolding. An LLM can read the full set of CLOs and PLOs and produce a draft curriculum map, flagging apparent gaps and redundancies. That draft will contain errors: the model may treat two differently worded outcomes as distinct when they address the same competency, or it may miss a genuine gap because it interprets a vague outcome too generously. But the draft gives the assessment committee a starting point, and a starting point is what most schools lack.
What the model cannot do
Everything in the previous section involves generating options. The judgment calls that determine whether those options are any good remain human.
The first and most consequential judgment is at the programmatic level: which PLOs represent this school's educational philosophy? The ABA provides minimum competencies under Standard 302(a), and schools can (and should) build on those. But the decision about whether a school's PLOs emphasize practice-readiness, public service, interdisciplinary thinking, or some other dimension of legal education is an institutional choice that reflects the school's identity, its student body, its relationship to the profession, and its honest assessment of what it does well. An LLM prompted to "generate PLOs for a law school" will produce a set of outcomes that sound like every other law school's PLOs, because the model draws on patterns in its training data: law school websites, accreditation documents, and curriculum guides that already exist. The result will be generic, and generic PLOs produce generic curriculum maps, which is how schools end up with compliance artifacts that satisfy auditors but tell the school nothing about its own educational program.
The second judgment call is whether a particular course outcome is honest. A CLO that says "students will evaluate the constitutional implications of administrative agency rulemaking" is measurable and well-formed. Whether it accurately describes what actually happens in a given professor's Administrative Law course is a different question: one that requires knowing what the professor teaches, how the course is structured, what students do in it, and whether the assessment instruments would actually reveal whether students can perform the stated task. An LLM has no access to any of that. A CLO drafted by a model and adopted without critical review is a statement about what a course aspires to teach, not what it does teach, and the gap between the two is exactly what Standard 315 is designed to surface.
The third judgment involves formative assessment calibration. Standard 314 requires feedback that allows students to evaluate their performance relative to the course learning outcomes. That means the formative assessment must test what the outcomes describe, at a level of difficulty appropriate for students at that point in the curriculum, and the feedback must be specific enough to be actionable. An LLM can generate a practice hypothetical and a model answer. Whether the hypothetical tests the right skills at the right level for the students actually sitting in the room is a question only the professor can answer.
And then there is the sycophancy problem. I have written at length about the tendency of large language models to affirm the user's stated position rather than challenge it. This tendency is directly relevant to curriculum design work. A faculty committee that uses an LLM to evaluate whether its curriculum is well-aligned to its PLOs will tend to get the answer it wants: yes, your curriculum is well-aligned. The model will find connections between CLOs and PLOs that a more skeptical reviewer would characterize as attenuated, and it will understate gaps that a rigorous assessment would flag. If you use an LLM to audit your own curriculum, you should prompt for disagreement in the same way I recommended for legal analysis: "Identify every PLO that lacks adequate curricular support across our course offerings. For each gap, explain why the existing CLOs are insufficient and what a course would need to include to address the deficiency." The framing that presupposes gaps will produce better analysis than the framing that asks whether gaps exist.
A practical workflow
What follows is the approach I would recommend for a school that needs to move from its current state to substantial compliance by August 2026. It draws on the specification-first methodology and context management strategies I have described in prior posts.
Start with a program specification document. Before anyone opens a chatbot, the relevant committee should produce a concise description of the school's educational mission, the current PLOs (or the draft PLOs the committee is considering), the structure of the required curriculum, the number and categories of elective offerings, and any institutional priorities that should shape the outcomes. This document becomes the task specification that anchors every subsequent LLM conversation.
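One way to make that specification reusable is to keep it as structured data and render it into the opening context of every conversation. A sketch under assumptions: the field names and sample contents below are hypothetical, and the real document can live in whatever format the committee prefers.

```python
# A sketch of a program specification kept as structured data so the same
# context can anchor every subsequent LLM conversation. Field names and
# contents are hypothetical; the substance is whatever the committee writes.
PROGRAM_SPEC = {
    "mission": "Prepare practice-ready graduates committed to public service.",
    "plos": [
        "PLO-1: Identify and analyze legal issues in novel factual contexts...",
        "PLO-2: Communicate effectively in written and oral advocacy...",
        # ...remaining PLOs, finalized by the committee
    ],
    "required_curriculum": ["Contracts", "Torts", "Civil Procedure"],
    "elective_categories": ["Clinics", "Seminars", "Experiential courses"],
    "institutional_priorities": ["Formative feedback in every first-year course"],
}

def spec_as_context(spec: dict) -> str:
    """Render the specification into the text that opens each conversation."""
    lines = [f"Mission: {spec['mission']}", "Programmatic learning outcomes:"]
    lines += [f"  - {p}" for p in spec["plos"]]
    lines.append("Required curriculum: " + ", ".join(spec["required_curriculum"]))
    lines.append("Priorities: " + "; ".join(spec["institutional_priorities"]))
    return "\n".join(lines)
```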
Draft and refine PLOs first. Use the LLM to generate candidate PLOs, but provide it with the program specification, any relevant accreditation guidance (including the Standard 315 report template), and a clear instruction to produce outcomes that are specific enough to be assessed. The committee reviews, debates, and finalizes the PLOs before moving to the course level. No CLO can be well-aligned to a PLO that has not yet been defined.
Generate CLOs course by course. For each course, provide the LLM with the finalized PLOs, the course syllabus or description, and any information about how the course is actually taught. Ask for draft CLOs that are specific, measurable, and aligned to identified PLOs. Start a new conversation for each course: the OTOC rule applies here because each course has distinct content and the model should not carry assumptions from the previous course's CLOs into the next. Faculty review the drafts against their own knowledge of the course, revise as needed, and confirm that the CLOs reflect what the course actually teaches rather than what it aspires to teach.
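In code, the one-conversation-per-course discipline is just a loop that builds a fresh message list on every iteration. A minimal sketch, again assuming the OpenAI Python SDK; the model name, file paths, and course list are placeholders.

```python
# A sketch of course-by-course CLO generation with a fresh conversation per
# course, so assumptions never carry over. File paths, course names, and the
# model name are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
program_context = Path("program_spec.txt").read_text()  # finalized PLOs plus mission
courses = ["Torts", "Contracts", "Civil Procedure"]      # in practice, the full catalog

Path("draft_clos").mkdir(exist_ok=True)
for course in courses:
    slug = course.lower().replace(" ", "_")
    syllabus = Path(f"syllabi/{slug}.txt").read_text()
    # A new messages list each iteration is a new conversation: the model
    # never sees the previous course's draft CLOs.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            {"role": "system", "content": program_context},
            {"role": "user", "content": (
                f"Course: {course}\n\nSyllabus:\n{syllabus}\n\n"
                "Draft four to six specific, measurable course-level learning "
                "outcomes aligned to the programmatic learning outcomes above, "
                "labeling each with the PLO(s) it advances."
            )},
        ],
    )
    Path(f"draft_clos/{slug}.md").write_text(response.choices[0].message.content)
```

Each draft file then goes to the faculty member who teaches the course, and the review described above is non-negotiable.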
For required courses, coordinate CLOs across sections early. Standard 302(c) requires uniformity: all sections of Contracts must share the same minimum learning outcomes. The LLM can generate a proposed set of uniform CLOs, but the faculty who teach those sections must agree that the outcomes describe what every section covers. This is a faculty governance question, not a drafting question, and no amount of generative AI will resolve a disagreement among three Contracts professors about what their course is for.
Design formative assessments for first-year courses. Once CLOs are finalized, use the LLM to generate candidate formative assessments: practice problems, short analytical exercises, issue-spotting tasks, reflective prompts. Ask for model answers and rubrics tied to the specific CLOs. Evaluate whether the assessments actually test the stated outcomes at an appropriate level of difficulty. The model will produce serviceable first drafts of assessment instruments far faster than most faculty could create them from scratch; the faculty member's role is quality control and calibration.
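If the school wants assessment drafts that drop cleanly into its documentation, it can ask for a fixed structure. A sketch, with hypothetical CLO text and a placeholder model name; the JSON-mode flag is available on recent OpenAI models and can simply be dropped if yours does not support it.

```python
# A sketch of a formative-assessment request tied to finalized CLOs, asking
# for structured output the school can file. CLO text and model name are
# placeholders.
import json
from openai import OpenAI

client = OpenAI()
clos = [
    "CLO-1: Apply the elements of negligence to a novel fact pattern...",
    "CLO-2: Analyze causation problems involving multiple tortfeasors...",
]

prompt = (
    "For the course-level learning outcomes below, draft one formative "
    "assessment suitable for week six of a first-year Torts course. Return "
    "JSON with keys: 'assessment' (the exercise as given to students), "
    "'model_answer', and 'rubric' (a list of criteria, each naming the CLO "
    "it measures).\n\n" + "\n".join(clos)
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # ask for parseable output
)
draft = json.loads(response.choices[0].message.content)
print(draft["rubric"])  # faculty check: does each criterion test the named CLO?
```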
Build the curriculum map. With CLOs in hand, the LLM can propose an alignment matrix: which courses advance which PLOs, at what level, and with what assessment methods. Run this in a separate conversation with the full set of PLOs and CLOs as input. Then run an adversarial session: "Identify every PLO that is inadequately supported by the current curriculum. For each, identify what is missing and what curricular change would address it." Compare the two outputs. Where the model's curriculum map says alignment is strong and the adversarial session says it is weak, that discrepancy warrants human attention.
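The two passes are easy to keep honest if each runs as its own conversation over the same inputs. A sketch under the same assumptions as above: file paths and the model name are placeholders, and the comparison at the end is the committee's job, not the script's.

```python
# A sketch of the two-pass audit: one neutral mapping pass, one adversarial
# gap-hunting pass, run as separate conversations over identical inputs.
from openai import OpenAI

client = OpenAI()
plos = open("plos.txt").read()
clos = open("all_course_clos.txt").read()  # the full, finalized set

def ask(prompt: str) -> str:
    """One self-contained conversation per pass, so neither pass sees the other."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": f"{plos}\n\n{clos}\n\n{prompt}"}],
    )
    return response.choices[0].message.content

mapping_pass = ask(
    "Produce a curriculum map: for each PLO, list the courses whose CLOs "
    "advance it and characterize the strength of the alignment."
)
adversarial_pass = ask(
    "Identify every PLO that is inadequately supported by the current "
    "curriculum. For each, identify what is missing and what curricular "
    "change would address it."
)

# The committee reads these side by side. Where the mapping pass calls an
# alignment strong and the adversarial pass calls it a gap, that PLO gets
# human attention first.
print(mapping_pass)
print(adversarial_pass)
```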
The distinction that determines whether this works
The revised standards changed what law schools must document. AI can accelerate the documentation. But the standards also changed what law schools must think about: whether their courses collectively produce the graduates they claim to produce, whether their assessments measure what their outcomes describe, and whether their curriculum has the structural coherence that a set of independently designed courses taught by independently minded professors does not guarantee.
An LLM can draft a learning outcome. It cannot tell you whether the outcome is true. That judgment requires knowing what happens in the classroom, what the students bring to it, and what the profession needs from the graduates who leave it. The schools that use AI to generate outcomes and adopt them uncritically will end up with syllabus language that reads well and describes nothing. The schools that use AI to generate options and then do the hard work of evaluating those options against their actual educational program will end up with something more valuable than compliance: a curriculum they understand.
This post draws on the text of ABA Standards 302, 314, and 315 (revised February 2025; effective August 2025; implementation required by the 2026-2027 academic year); the ABA Standard 315 report template (draft released November 4, 2025); Susannah Pollvogt's overview and four-step plan for LSAC; the Indisputably analysis of revised Standard 314; and Attorney at Law Magazine's reporting on the deans' letter. The framework for delegating generative tasks to AI while retaining professional judgment builds on approaches described in prior posts on judgment delegation, context management, sycophancy, and specification-first methodology.