There is a lot of writing about what AI will do to education. Almost all of it is speculative. This is not that.

This is a framework for thinking about what AI tools currently do when they land in a classroom — the observable effects on student work, teacher workload, and the validity of the assessments we have inherited.

## The three things AI does that matter

When you strip away the marketing, there are three things AI actually does that matter for classroom practice:

1. It reduces the cost of producing text.

   Before AI, producing a competent paragraph about the causes of World War One required a student to either know something or copy it from somewhere visible. AI reduces the knowledge threshold for producing plausible-sounding text. This is not new — the internet did a version of this too — but AI does it more fluently and more invisibly.

2. It changes what "effort" looks like.

   Some students use AI to avoid thinking. Others use it to extend their thinking — to iterate on ideas, to check their logic, to get feedback at 10pm when no teacher is available. The same tool does both things, and from the outside, the outputs can look identical.

3. It makes existing weaknesses in assessment design visible.

   An assessment task that could always be completed without thinking hard now has a tool that makes that obvious. AI did not create bad assessment design; it revealed it.

<Callout type="note">
This framework is deliberately limited to what we can observe. It does not claim to predict how AI will develop or what schools should do about it long-term.
</Callout>

## What this means for assessment design

If AI reduces the cost of producing text, then assessments that primarily ask for text production are now measuring something different from what they were designed to measure.

This does not mean written assessments are worthless. It means we need to be clearer about what we are actually trying to measure, and whether the task we have designed measures it.

A useful diagnostic question: what would a student need to know, or be able to do, to produce this response even with AI assistance? If the answer is "not very much," the assessment may have always been measuring effort and compliance more than understanding. AI has just made that legible.

### The revision approach

One practical response is to design assessments that include a revision phase with explicit reflection. Ask students to improve a piece of work and explain what they changed and why.
This creates a record of thinking that is harder to produce without genuine engagement.

### The conversation approach

Another is to include an oral or conversational component — not as a gotcha, but as a genuine extension of written work. "You wrote that X is the most important factor. Walk me through your reasoning."

Students who understand what they wrote can do this. Students who did not engage cannot reliably improvise the underlying thinking.

## What this does not mean

It does not mean banning AI is the solution. Bans are not enforceable, and they shift the teacher's energy toward detection rather than learning.

It does not mean AI is the enemy of good teaching. A teacher who is clear about what understanding looks like — and who designs tasks that require students to demonstrate that understanding — will find AI less disruptive than a teacher whose tasks have always been primarily about output.

<Pullout>
AI did not break assessment design. It revealed which assessments were already broken.
</Pullout>

## A practical starting point

If you want to audit your own assessment design in light of this, start with one task you are currently using and ask three questions:

1. Could a student produce a passing response to this task without understanding the core concept? (If yes: how?)
2. Would AI assistance make that easier? (If yes: significantly, or marginally?)
3. What evidence would tell you whether a student actually understands? (Does your current task produce that evidence?)

This is not a framework for detecting AI use. It is a framework for designing assessment that is valid regardless of what tools students have access to. That is the more durable problem to solve.