Matthew Crawford, writing in The Hedgehog Review, argues that LLMs (large language models such as ChatGPT) are just the latest iteration of a process of “self-erasure,” in which we decline the task of being human and opt instead to be assimilated into larger and larger impersonal forces.
LLMs are a particularly interesting example of this trend, though, because our use of language is a distinctly human enterprise. It is through articulating that we understand the significance of things, events, and even ourselves. As Crawford notes, “we ‘self-articulate’ as part of the lifelong process of bringing ourselves more fully into view: how I stand, the particular shape that various universal goods have taken in my own biography, and in my aspirations.” Listen to how he puts it:
LLMs are built on enormous data sets—essentially, all language that is machine-scrapable from the Internet. They are tasked with answering the question, “given the previous string of words, what word is most likely to occur next?” They thus represent what the philosopher Talbot Brewer recently referred to as “the statistical center of gravity” of all language (and I am following Brewer’s lead in viewing LLMs through the lens of Taylor’s account of language). Or rather, all language that is on the Internet. This includes the great literature of the past, of course. But it includes a whole lot more of the present: marketing-speak, what passes for journalism, the blather produced by all who suffer from PowerPoint brain. But put aside the impoverished quality of the language that these LLMs are being trained on. If we accept that the challenge of articulating life in the first person, as it unfolds, is central to human beings, then to allow an AI to do this on our behalf suggests self-erasure of the human.
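The question Crawford describes, “given the previous string of words, what word is most likely to occur next?”, can be made concrete with a toy sketch. Real LLMs answer it with neural networks trained on vast corpora, not by counting word pairs; the tiny bigram model below, with an invented corpus, is only meant to show what a “statistical center of gravity” of language looks like in its simplest possible form.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "all language that is machine-scrapable."
corpus = "we articulate ourselves and we articulate the world".split()

# Count which word follows each word in the corpus.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def most_likely_next(word):
    """Given the previous word, return the statistically most common next word."""
    followers = transitions[word]
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("we"))  # -> "articulate"
```

The model can only ever return the most frequent continuation already present in its training data, which is the point Crawford and Brewer are pressing: such a system speaks from the center of gravity of what has already been said, not from a first-person life as it unfolds.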