alex wennerberg

AI can sort of code, can't write

I started a professional sabbatical in January. For the month of March, I wrote basically full-time. This is possibly the first time in my life that I have spent so much sustained time writing, and it's been one of the more enjoyable experiences of my life. I spent about ten years working in the tech industry, and it took me a few months to almost completely detach myself from it.

It's difficult to think about my life and relationship to technology lately without considering the ongoing revolution in artificial intelligence, especially living in San Francisco, where it's hard to go a day outside without overhearing someone discussing it (I myself have been the perpetrator of this at times). A lot of these discussions seem to miss the mark. Initiatives like AI 2027 imagine, for example, some future where these chatbots have magical "superhuman" powers that take over the world.

For me, AI, and the discourse around it, has been a great clarifying force in my life. Experimenting with Claude for coding was actually really nice — it made so many toilsome tasks very easy and, with some coaxing, allowed me to build incredible things with software, very quickly. More than that, it made me realize how much I usually don't enjoy writing code on modern software systems. Many people lament that AI is replacing some aspect of their work, but for me, it was liberatory: there are so many miserable aspects to working with computers that I just don't have to worry about anymore.

This isn't to say I am uncritical of these technologies. As I wrote in my previous blog post, AI doesn't really possess a sense of craft. But it helped clarify my relationship to software, one which may be far narrower in my life than it was before. I realized that the vast majority of tasks in professional coding are artless nonsense, and that the space for craft in computing may actually be quite narrow, and very different from the kind of work I was doing before. I've developed a renewed interest in "recreational programming". YouTube channels like InkBox or Kay Lack restore an artfulness to programming that my primary encounter with it, working in a corporate environment on modern machines, lacked. These channels, and others, are especially interested in "low-level" assembly programming. When writing in assembly, you are speaking the language of the machine: you're not working with human language (the domain of expertise of LLMs), you are directly addressing the hardware. LLMs can produce assembly, of course, but they can't quite reproduce the satisfying experience of building "something from nothing" that assembly programming uniquely offers.

I've also been interested in the community disassembly of Pokemon Red. Hobbyist programmers have reverse engineered many Pokemon games so that you can read, if not the original source code, something that assembles into the exact same game. I find this experience very moving — imagining a small team of programmers 30 years ago wrangling Z80-style assembly for six years into a game that ultimately revived a dying console and produced a $100 billion media empire. Even before AI, that kind of story was entirely foreign to the contemporary experience of programming, which largely involves shuffling data from one place to another for some advertiser-funded SaaS platform. The kind of craft involved in producing the first Pokemon games was already lost to "higher-level" languages, but it was AI that inspired me to revere this kind of programming. Z80 assembly was already "useless" and "obsolete", but that doesn't diminish its specialness, and in a post-agentic world it may be a much better language than, say, Python for restoring a non-AI, "recreational", "human craft" attitude towards programming.

AI has also re-inspired my relationship with writing. I've experimented with using these tools as part of my research, drafting, and editing process, and they are basically entirely useless, probably even net harmful. Unlike coding, where agents are increasingly effective, AI contributes almost nothing to the endeavor of creative or expressive writing, and I am very confident that AI will never replace humans in this domain. Recent reporting about writers using AI has been scandalous in a way that, to most people, AI code generally is not (though some developers view agentic coding as having an irreversible tainting effect on codebases). In the realm of software, code has some clear purpose, and there's nothing wrong with "stealing" other people's code (this is the basis of open source). But writing is different — stealing, and relying on AI, defeats the point of non-technical writing, because the whole purpose of the endeavor is that you are considering and expressing something uniquely yours.

In both writing and code, AI serves to clarify why we are doing some activity, and what makes it particularly human. AI advocates sometimes draw graphs of "human capabilities" which are being increasingly "replaced" by AI — forecasting that eventually machines will be able to do "everything" that a human can do, and more. This completely misunderstands the nature of human experience. Computers, for example, have been unfathomably better than human beings at doing complex arithmetic for quite some time, but that did not make them "super-human".

Some imagine intelligence as a "scale" (it's worth noting the recent obsession with IQ among tech people) and ask us to imagine a superintelligent AI having the same relationship to us as we do to dogs, fish, or ants. This completely misunderstands what it is to actually be a human, or an animal. Experience is not "ranked", but rather particular. I recently watched the Pixar movie Hoppers, where the main character inhabits a robot that allows her to live among animals and communicate with them. Apparently, Pixar put a lot of effort into getting beaver behavior right in the film. Beaver researcher Emily Fairfax served as a scientific consultant, and many of her ideas were worked into the plot. When Fairfax was asked what she would do if she could communicate with beavers, she had a list of questions for them: why they make dams, whether they like certain other animals they interact with regularly, or whether, for example, they think it smells nice when flowers grow on their dams. What was interesting is that, even for someone who spends her life studying these so-called "lower" animals, there is so much she doesn't know about them, especially about their experience of the world, which remains inscrutable and mysterious.

I've been reading a lot of French philosopher Maurice Merleau-Ponty lately. For Merleau-Ponty, experience is fundamentally embodied and worldly. We are not, in his view, a rational mind that encounters the world through thought and detached observation; rather, we are our worldly, perceptual bodies. An AI system lacks a body and a world; if it has a "body" in any sense, it is the body of text in its corpus. It does not experience the world as we do, through our senses, in our particular corporeal forms, and it never will. Thus, it can never "replace" or "exceed" humanity, much as we do not "replace" beavers, who have their own distinct world, no poorer than our own, unique to themselves in fascinating and mysterious ways. We probably understand beavers less than we understand AI agents, despite their "capabilities" being allegedly so much "higher".

The reduction of human experience to "capabilities" comes from a machinic vision of what human beings are. When human or animal life is reduced to a quantity of useful material, that becomes our totalizing model for life, something we can reduce experience to. But this reduction only occurs within systems of measurement, quantification, and discipline (by the state, say, or within a capitalist firm) — real human life as it is actually lived far exceeds what is captured by these models. Writing is a particular example of this. AI optimists imagine that someday writing will be "solved" by AI. But neither writing, nor the more "craft"-centered vision of coding I articulate, is a "capability" of human beings — they are undirected, interminable, and personal activities. This isn't a return to some "romantic" concept of human experience; rather, it's the simple observation of what human beings are: not machines, but bodies, "always already" entangled in a world and in relation to others in their own distinct ways, as an AI system never will be.