AI and Local Knowledge
More thoughts on legal education
Yesterday, I wrote about the existential threat posed by artificial intelligence to the legal academy. One major theme was that, in fact, artificial intelligence will not render human lawyers altogether obsolete, because there will always be a demand for highly specialized “local knowledge” - local to niche legal settings with niche legal problems and relatively insular communities with their own special priorities and interests. And, critically, AI does not operate at the level of the local; it operates at a relatively high level of generality, abstracting away local differences and distinctions. If there is hope for law schools in the age of AI, I argued, one might think that it principally lies in providing this kind of local knowledge. Yet that is exactly the kind of knowledge that law schools cannot teach at scale - which is precisely why I argued that law schools are going to have trouble defending their existence in the not-too-distant future.
Today, I read a long-form post on X/Twitter by Zack Shapiro that amplifies some of these points - in particular, the ongoing value of local knowledge. Shapiro is a lawyer who runs an “AI native law firm”. He observes that lawyers (among others) often complain about the results of AI, when in fact they are simply failing to engage in proper “prompt engineering”. Somewhat startlingly, he suggests that a useful prompt may be about 2,000 words long. Why? Because the AI needs a prompt that contains enough domain-specific/local knowledge at the input end to allow it to provide genuinely useful answers at the output end. In the absence of that highly local/specific input, the AI is simply incapable of yielding more than “vaguely competent” slop.
(I also recommend this short post by Jefferson Ng, who engages with the Shapiro piece as well and has thoughts on the impact of AI on social scientists.)
Consider the kind of information Shapiro thinks is needed to engineer a suitable prompt - essentially everything that might be conveyed by a senior partner handing off a complex file to an associate:
Think about how much context a senior lawyer conveys when handing off a deal. All the “unstated assumptions” that should likely be stated for good measure. The relationship dynamics. The judgment calls about what matters and what’s noise. A typical prompt of mine covers the client’s business model and risk tolerance, the history with this particular counterparty, which deviations from market terms are acceptable, how the comments should read stylistically, what the client’s other outside counsel has historically fixated on. That’s before I get to the specific legal instructions about the document itself.
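For concreteness, here is a minimal sketch of what assembling a prompt along these lines might look like. To be clear, this is my own toy illustration, not Shapiro’s actual template: the section headings, the example contents, and the build_prompt helper are all hypothetical.

```python
# Illustrative only: a prompt assembled from labelled blocks of the kind of
# "local knowledge" Shapiro describes, with the generic legal instruction
# coming last. Section names and contents are hypothetical examples.

CONTEXT_SECTIONS = {
    "Client business model and risk tolerance":
        "Mid-market SaaS vendor; revenue depends on enterprise renewals; "
        "strongly risk-averse on IP indemnities, flexible on payment terms.",
    "History with this counterparty":
        "Third deal with this buyer; prior negotiations stalled over "
        "limitation-of-liability caps.",
    "Acceptable deviations from market terms":
        "May concede a 12-month non-solicit; may not concede uncapped "
        "indemnification.",
    "Style of comments":
        "Terse, numbered, addressed to opposing counsel; no rhetorical "
        "flourishes.",
    "What the client's other outside counsel fixates on":
        "Data-protection warranties and audit rights.",
}

TASK = "Review the attached master services agreement and propose redlines."


def build_prompt(sections: dict[str, str], task: str) -> str:
    """Concatenate the labelled context blocks, then the specific task."""
    blocks = [f"## {heading}\n{body}" for heading, body in sections.items()]
    blocks.append(f"## Task\n{task}")
    return "\n\n".join(blocks)


if __name__ == "__main__":
    prompt = build_prompt(CONTEXT_SECTIONS, TASK)
    print(f"{len(prompt.split())} words of context before the task itself.")
    print(prompt)
```

Even in this toy version, nearly all of the words are context rather than instruction - which is the point.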
Local knowledge is what’s needed to make the AI truly work. That is the ‘value added’ by the lawyer. To underscore this point, Shapiro notes that the people best equipped to ‘speak’ to AI won’t necessarily be technologists, but people who possess “deep knowledge” of their fields and clients:
They’re domain experts with a specific and somewhat unusual combination of traits: deep knowledge of their field, the ability to articulate that knowledge with extreme precision, and the discipline to treat every AI interaction as an opportunity to encode their judgment a little further. They are, in short, the people who were already the best at their jobs, plus this one new dimension.
Think about what a 2,000-word prompt actually requires. You need to know your domain deeply enough to identify the issues that matter. You need to understand your client’s situation specifically enough to set the right priorities. You need the linguistic precision to describe all of this without leaving gaps for a literal machine to misinterpret. And then, at the skill level, you need the metacognitive discipline to watch the output, diagnose what’s working and what isn’t, and iteratively refine the system. That combination doesn’t show up on any standard hiring rubric. But it’s the combination that produces 10x output.
As I indicated yesterday, this should give (at least some) lawyers hope. I don’t know that it should give law schools any comfort, since it is exactly this kind of deep, local knowledge that law schools cannot offer at scale. Indeed, given the generic level at which material is taught, and the kind of knowledge that must be demonstrated by students in order to progress and graduate, one might unkindly venture that what we offer is itself a kind of slop. Sigh.
Comments welcome. I have appreciated the supportive remarks on yesterday’s post, as grim as it is. People are clearly interested in having this conversation - if not exactly eager to.


Terrific points. I'm going to stew over them. Thanks so much for this.
I liked the previous post and I like this one too. But Shapiro’s point crystallizes a general worry that I have had about AI. In particular, Shapiro says prompts may need to be very long and very specific in order to encode the special knowledge and judgments which good lawyers have and which LLMs lack. I am somewhat skeptical about whether this can really be done easily.
One way of representing the “value proposition” of human lawyers in a world where LLMs are cheaply available for legal tasks is to focus on the local knowledge which humans have but which LLMs, being general or abstract, lack. This is the way you’ve phrased the issue so far. But a different dimension, one which often coexists with the local vs. general knowledge dimension, is the tacit vs. explicit knowledge dimension (we could also phrase this as legibility vs. illegibility). The best lawyers I have met—really the best professionals I have met—know more than they can say. This knowledge comes out in application, and is often constitutive of what we call “good judgment”. But I don’t think this can be transferred straightforwardly, if it can be transferred at all, whether by professors to students or from humans to LLMs. How this tacit or illegible knowledge develops at all is pretty mysterious in my experience; the best I can say is that it forms from a very large number of examples of a task, as well as a large number of situations where a well-formed and well-functioning mind applies itself, with feedback, to that particular task.
I think it’s clear that the models of learning where tacit knowledge can be most easily transferred, if it can be transferred at all, look a lot like apprenticeships. Notably, this is a model of education that universities, based as they are on mass explicit instruction, cannot embody.
But it also leads me to wonder whether human collaborations with LLMs that look like the relationships humans have with work partners will be limited in their effectiveness, because there are limits to how much, and how well, even the best lawyer can convey their tacit understandings to an LLM. For a human to convey this knowledge effectively, much like a master training an apprentice, would require an LLM that can continuously learn, i.e., one where the training stage isn’t distinct from the inference stage. But if LLMs can learn in this way, the value of the human will dissolve.
(To put some citations on the table, I think LLMs are high-modernist technologies, but that means they are subject to the critiques developed by thinkers like James C. Scott and Michael Polanyi.)