Last week we heard the latest installment in the prophesied AI jobs apocalypse. This time, it was Dario Amodei, the CEO of Anthropic, who told Axios that “AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years” (italics original). Axios adds: “Imagine an agent writing the code to power your technology, or handle finance frameworks and analysis, or customer support, or marketing, or copy editing, or content distribution, or research. The possibilities are endless — and not remotely fantastical. Many of these agents are already operating inside companies, and many more are in fast production …. Make no mistake: We've talked to scores of CEOs at companies of various sizes and across many industries. Every single one of them is working furiously to figure out when and how agents or other AI technology can displace human workers at scale. The second these technologies can operate at a human efficacy level, which could be six months to several years from now, companies will shift from humans to machines.” The piece then argues that this will be different from previous technological disruptions because of the speed with which it will occur.
Someone should tell that to the workers put out of work all but overnight by the development of machinery in the nineteenth century, as detailed by Marx (who helpfully notes in the machinery chapter of Capital that the drive to full, steam-engine-driven automation is motivated by the inability of capitalists to extract any more surplus value from over-exploited workers). One should also remember, with Jathan Sadowski, that these sorts of proclamations are in part designed to create their own reality, such that “the power of expectations can have a disciplining effect on what people think” and that “the capitalist system is designed to pummel us into submission, preventing us from imagining life could be any other way, let alone allowing us to go on the offensive” (The Mechanic and the Luddite, 196, 207). When Axios adds that “this will likely juice historic growth for the winners: the big AI companies, the creators of new businesses feeding or feeding off AI, existing companies running faster and vastly more profitably, and the wealthy investors betting on this outcome,” one can thus hardly be too surprised.
Here, I want to take a slightly different angle, however, and think a little bit about the kinds of jobs that are supposed to go away. It’s hard not to notice the parallels between the Axios list and this one:
“We have seen the ballooning of not even so much of the ‘service’ sector as of the administrative sector, up to and including the creation of whole new industries like financial services or telemarketing, or the unprecedented expansion of sectors like corporate law, academic and health administration, human resources, and public relations.”
That list is about the jobs that appeared when automation took away factory jobs, and it’s a list of jobs whose existence even those who do them often cannot explain or justify – what David Graeber called “Bullshit Jobs.”
After reflecting a bit on the moralizing source of this phenomenon – “work has dignity!” – and the psychological damage it does to be told that work has dignity when you know your job is pointless, Graeber concludes:
“If someone had designed a work regime perfectly suited to maintaining the power of finance capital, it's hard to see how they could have done a better job. Real, productive workers are relentlessly squeezed and exploited. The remainder are divided between a terrorised stratum of the universally reviled, unemployed and a larger stratum who are basically paid to do nothing, in positions designed to make them identify with the perspectives and sensibilities of the ruling class (managers, administrators, etc.)—and particularly its financial avatars—but, at the same time, foster a simmering resentment against anyone whose work has clear and undeniable social value. Clearly, the system was never consciously designed. It emerged from almost a century of trial and error. But it is the only explanation for why, despite our technological capacities, we are not all working 3–4 hour days”
In a paper in the inaugural issue of the Journal of Adorno Studies, Fabian Freyenhagen, Anastasios Gaitanidis, and Polona Curk reflect on Adorno’s thoughts on psychoanalysis. For Adorno, the 19th century repressive regime (where my socially unacceptable urges have to be sublimated into something productive) has been replaced at the social level by a sick normality; in Adorno’s words from Minima Moralia, “the regular guy, the popular girl have to repress not only their desires and insights, but even the symptoms that in bourgeois times resulted from repression” (MM sec. 36, qt. p. 20). As Freyenhagen et al. summarize, for Adorno, “the punitive superegos had tended to give way to weak egos, in which even the symptoms of the repression of desires are repressed because conflict itself is prevented by the available prefabricated gratifications” (23). They offer an updated diagnosis:
“Today’s predominant modes of existence embody — literally, rather than consciously —the awareness that one’s importance as an individual to the economy of unhinged shareholder capitalism is gone. The experience of the majority is one of increased replaceability, even disposability. As a result, one’s individuality is gradually dismantled. Persons feel like ghosts, zombies, neither alive nor dead. The mode of existence in higher echelons is hardly better. Fancying themselves more important on the individual level, they are submitted to 24hr-availability, maddening competition, and a perpetual push towards self-improvement to increase their value to the system, until they are, ultimately, only able to keep going with the help of extreme distractions and various addictions. They too sense that there is no safety net, leaving everyone a misstep away from seeing their quantifiable “value” in the neoliberal system depreciated” (24).
Under these circumstances, “therapy itself becomes about management of this state, of self-structure, of perpetual and continual crises: an active intervention rather than psychoanalytic work proper.” Whether or not this is a good reading of Adorno’s reading of the value of psychoanalysis, I want to notice that it does read like a psychoanalytic account of the sort of damage that knowing your job is bullshit can do. In other words, Freyenhagen et al.’s account is deeply congruent with Graeber’s.
This leads to something of a paradox. The jobs that AI is supposed to do are largely bullshit jobs, and the proliferation of those jobs is both damaging to any notion of dignified work and the product of previous advances in automation. As Graeber says, “it's as if someone were out there making up pointless jobs just for the sake of keeping us all working.” We can plausibly expect that AI is going to be pretty good at those bullshit jobs, because an awfully high percentage of its training data consists of the effluvia of bullshit jobs, and (let’s face it) a lot of those jobs are basically the same.
So what happens when we learn how to automate not just making stuff, but bullshit? One scenario is that the jobs apocalypse isn’t coming in the manner advertised. On this scenario, it turns out that people are better at bullshitting than even machines that are known for their bullshitting. At least some bullshit cannot be merely adequate! There’s also growing support for regulations that keep “humans in the loop” of AI systems, even when it’s pretty clear that the humans are there either as liability sponges (people to blame when the systems fail) or just to make sure somebody has a job. Evidence in support of this sort of hypothesis is that AI superintelligence isn’t, and that a lot of AI hype is based around the desire to raise venture capital (Sadowski is very good on this point). The actual systems themselves underperform, are still based on crappy training data, hallucinate all the time, and so on.
Another scenario, visible only if you ignore the marketing hype around AI and realize that it depends on a vast sector of exploited humans to prop up the systems with such tasks as data labeling and reinforcement learning, is that this sector of precarious workers will expand. Given that the economy depends on consumption, one assumes that there will be more bullshit to do, fairly quickly, and that those assigned to do it will need even more active intervention. Those jobs will likely become more precarious.
A better idea, which (therefore) seems a lot less likely to actually happen, is suggested by Amodei: “every time someone uses a model and the AI company makes money, perhaps 3% of that revenue ‘goes to the government and is redistributed in some way.’” (UBI?) That’s actually an idea worth thinking about, but since it would be bad for the Silicon Valley capitalists, we can safely assume it won’t come about unless they perceive that the alternative is their utter destruction. That might happen, because widespread unemployment tends to fuel authoritarian populism (good news – AI can facilitate authoritarianism, so crisis averted!).
The one thing that won’t happen is that anybody will get to work 3 or 4 hour days. That’s because “these technologies are not merely corrupted by global capitalism, they are created out of it, to serve its interests – that is, the interests of the people who control the capital needed to build, scale, and use these technologies in really meaningful ways” (Sadowski, 121). To find somebody who aspires to use the productivity gains of machinery and automation to free up people for leisure and higher pursuits, you should probably start with Marx.