Last week, the Princeton professor D. Graham Burnett wrote an essay for the New Yorker about his fascination with how generative AIs and LLMs are changing the way his students interact with and ingest information. The first half of the piece is interesting, a sort of interrogation of how the technology is changing behavior on campus and how students and university administrators are trying to outmaneuver one another. I was slightly surprised to hear that the ChatGPT IP address is blocked on some university networks, but we were using VPNs to download virus-packed torrents on campus internet in 2009, so I can’t imagine what the kids are able to do now.
Towards the middle of the piece, though, Burnett starts engaging in another sort of curiosity. It’s a much more weepy, credulous kind of analysis. Here’s how he talks about one student named Paolo who is investigating an LLM’s prowess at writing music:
When Paolo asked if [the LLM] could have an emotional relationship to a song, the system carefully distinguished between recognizing emotion in music and actually feeling it. It said it lacked a body, and that this absence barred it from certain ways of knowing music. Paolo asked it to write a song that would make him cry.
It tried. Paolo sent me a note: “The system failed the test.”
But I was crying, there on the couch, reading.
When I read this essay for the first time, I admit I was sort of taken with Burnett’s descriptions of how an LLM could perform exceptional levels of analysis on esoteric topics. That’s genuinely exciting to me; since the internet has ruined my attention span, it would be nice to have a robot read, like, a bunch of essays on Heidegger and give me the play-by-play. But on a second reading a couple of days later, the whole thing felt treacly, like someone seeing god in a magic trick.
Burnett’s essay is part of an emerging collection of writing I call AI Romanticism. There is a techno-optimism in it, sure, but rather than focusing on the ability of platforms like ChatGPT to eliminate busywork — it’s pretty good at this, FWIW; I just used Gemini to write a bunch of form letters for a mortgage application — AI Romantics attach spiritual and intellectual meaning to the outputs of a weighted prediction algorithm. They say that these things “think” or that they “tell.” Burnett, to his credit, caveats his wonder pretty extensively.
The A.I. tools my students and I now engage with are, at core, astoundingly successful applications of probabilistic prediction. They don’t know anything—not in any meaningful sense—and they certainly don’t feel. As they themselves continue to tell us, all they do is guess what letter, what word, what pattern is most likely to satisfy their algorithms in response to given prompts… The results are stupefying, but it’s not magic. It’s math.
That is a pretty clean definition, but then for some reason he quickly doubles back a couple of paragraphs later. “The current systems can be as human as any human I know, if that human is restricted to coming through a screen (and that’s often how we reach other humans these days, for better or worse),” he writes. Holding both of those understandings — that this is a machine; that this machine is basically human — simultaneously is one of the odder manifestations of this emerging tradition. This is not a mathematician going into a flow state writing equations on a blackboard, but a humanist seeing an algorithmic output and wondering if we’ve discovered a new life form.
One point I reliably make when AI comes up in conversation, to the point that I think I’ve started annoying my friends, is that these algorithms cannot invent anything new because they are written to revert to a mean. Yes, you can fuck with the temperature sliders and make them give you funkier outputs, but you’re only influencing predictability. Letting an LLM go nuts and give you a string of low-likelihood outputs is cool, but it’s not inventing anything new.
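For what it’s worth, here’s a minimal sketch of what that slider actually does, assuming the standard softmax-with-temperature sampling setup (the logits below are invented for illustration; a real model produces one per vocabulary token):

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick the next token by temperature-scaled softmax sampling.

    Dividing the logits by the temperature flattens (T > 1) or
    sharpens (T < 1) the distribution before sampling. Higher
    temperatures make long-shot tokens come up more often, but every
    draw still comes from the same learned distribution; the slider
    changes predictability, not the model.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()                        # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Made-up logits for four candidate tokens; the ranking never changes,
# only how often the unlikely ones get sampled.
logits = [4.0, 2.0, 1.0, 0.5]
for t in (0.2, 1.0, 2.0):
    draws = [sample_next_token(logits, temperature=t) for _ in range(1000)]
    print(f"T={t}: {np.bincount(draws, minlength=4) / 1000}")
```

At T=0.2 nearly every draw is the top token; at T=2.0 the long shots show up constantly. Either way, you’re sampling the same distribution the model already learned.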
Most people don’t use LLMs that way, of course. They want the most predictable thing, because that’s what’s going to get you a B in AP English Lit or write moderately effective lines of code. But while querying an LLM with the right syntax and structure is a skill in itself, I worry that engineering good prompts is going to replace people’s ability to do any of this shit themselves. Telling an LLM to spit out an essay on the themes of resistance and masculinity in Billy Budd and Bartleby is not a new mode of being; it’s an erosion of understanding and ability whose most likely endpoint is that culture gets even more boring because everything is a prompt away.
That is not limited to the humanities either. Something like vibe coding, where engineers use LLMs to write code and then use LLMs to troubleshoot that code when they run into error messages, is one of those things that drives me insane, because eventually there’s going to be a moment where engineers just cease to know how to fix something without an LLM. Skills aren’t a fast-replenishing resource either. The aircraft mechanic pool has been shrinking for the last few years; it will take a long time to replenish it while we deal with the consequences in the meantime.
For writers and writing, though, my biggest worry has always been that 1) people will be satisfied with average writing, which is all LLMs are capable of, and 2) people will forget how to write, full stop. Burnett’s essay resolves into something of a polemic towards the end, and he starts writing in staccato paragraphs laced with italic emphasis. The ending outlines an unfamiliar and uncomfortable alternative to learning as I know it, though I admittedly sympathize with Burnett’s allergy to education-as-economic-output.
We have, in a real sense, reached a kind of “singularity”—but not the long-anticipated awakening of machine consciousness. Rather, what we’re entering is a new consciousness of ourselves. This is the pivot where we turn from anxiety and despair to an exhilarating sense of promise. These systems have the power to return us to ourselves in new ways.
Do they herald the end of “the humanities”? In one sense, absolutely. My colleagues fret about our inability to detect (reliably) whether a student has really written a paper. But flip around this faculty-lounge catastrophe and it’s something of a gift.
You can no longer make students do the reading or the writing. So what’s left? Only this: give them work they want to do. And help them want to do it. What, again, is education? The non-coercive rearranging of desire.
The reflexivity that closes that first paragraph is like a black hole: so dense and all-encompassing but ultimately full of nothing. Burnett attempts to tell us what self he wants us to return to, but in doing so he preemptively lops off the rhetorical impact of the statement about desire that follows, the idea that education is the “non-coercive rearranging of desire.” Desire emerges from experience. In the humanities, the purpose of LLMs and AIs and all these machines is to be a shortcut around experience. They are not rewiring desire but burying it under predictable outcomes. There is nothing less interesting than being told what you want. Finding out for yourself is ecstatic.
I've long said that by removing the dwindling economic value surrounding art and humanistic efforts, the AI slop merchants end up counterintuitively returning the humanities to their rightful place. There is no greater illustration of the singular value of art than 1) the empty, mediocre, soulless fluff the machines give us, which shows us why the process matters, and 2) the reaffirmation that the humanities are constituted as such because of, and for, their human root. (It's literally in the name, folks!) In some ways, we have already always been slopped to the gills (slop jobs, slop data informing slop decisions, slop relations, a full-on slop economy). AI arrives historically right as we reached a kind of exhaustion with hyper-financialized mid-brow and, of course, was enabled by the surveillance state's degradation of 90% of online information exchange into SEO. My radical turn, where I don't expect too many to follow me, is that this will actually result in a renaissance for cultural institutions, which are now rediscovered for the purpose for which they were always built: to be mindful regulators of humans' organizational drive to create meaning in and above their individual concerns.
This is spot on. To your point about Heidegger (or whoever, really), the idea that you can just get the bullet points and understand it is the danger, at least to me. There are some things that can't be summarized; the unfolding of the argument itself is the point, and if you just get the takeaway, you won't really understand it. I spend most of my time with people who use AI pervasively (college students) and this is the thing they fail to grasp. They don't get that how something is expressed is inseparable from what it's expressing.