Discussion about this post

Michael Plaxton:

Terrific points. I'm going to stew over them. Thanks so much for this.

Sam Waters:

I liked the previous post and I like this one too. But Shapiro’s point crystallizes a general worry I have had about AI. In particular, Shapiro says prompts may need to be very long and very specific in order to encode the special knowledge and judgments which good lawyers have and which LLMs lack. I am somewhat skeptical about whether this can really be done easily.

One way of representing the “value proposition” of human lawyers in a world where LLMs are cheaply available for legal tasks is to focus on the local knowledge which humans have but which LLMs, being general or abstract, lack. This is the way you’ve phrased the issue so far. But a different dimension, one which often coexists with the local vs. general knowledge dimension, is the tacit vs. explicit knowledge dimension (we could also phrase this as legibility vs. illegibility). The best lawyers I have met—really the best professionals I have met—know more than they can say. This knowledge comes out in application, and is often constitutive of what we call “good judgment.” But I don’t think it can be transferred straightforwardly, if it can be transferred at all, whether from professors to students or from humans to LLMs. How this tacit or illegible knowledge develops at all is pretty mysterious in my experience; the best I can say is that it forms when a well-formed and well-functioning mind applies itself, with feedback, to a very large number of examples and situations involving a particular task.

I think it’s clear that the models of learning where tacit knowledge can be most easily transferred, if it can be transferred at all, look a lot like apprenticeships. Notably, this is a model of education that universities, based as they are on mass explicit instruction, cannot embody.

But it also leads me to wonder whether human collaborations with LLMs that resemble the relationships humans have with work partners will be limited in their effectiveness, because there are limits to how much and how well even the best lawyer can convey their tacit understandings to an LLM. For a human to convey this knowledge effectively, much as a master trains an apprentice, the LLM would need to learn continuously, i.e., its training stage could not be distinct from its inference stage. But if LLMs can learn in this way, the value of the human will dissolve.

(To put some citations on the table: I think LLMs are high modernist technologies, but that means they are subject to the critiques developed by thinkers like James C. Scott and Michael Polanyi.)
