
Opinion: What’s behind the AI boom? Exploited humans

A visitor touches a humanoid robot hand at an AI exhibition booth during the World Artificial Intelligence Conference in Shanghai on July 4.
(Andy Wong / Associated Press)

Today we are in the middle of a hype cycle in which companies are racing to integrate artificial intelligence tools into products, transforming fields including logistics, manufacturing and healthcare. The global AI market was worth approximately $200 billion in 2023 and is expected to grow more than 20% each year to nearly $2 trillion by 2030. There are no exact figures for how many workers participate in the industry globally, but the number runs into the millions. And if current trends continue, it will expand dramatically.

When we use AI products, we are inserting ourselves into the lives of these workers. Many tech companies present a vision of their products as shining, autonomous machines — computers teaching themselves from huge quantities of data — rather than the reality of the labor that trains them and is managed by them.


The illusion of autonomous AI has historical roots. In the late 18th century, a supposedly automated chess-playing machine, the “Mechanical Turk,” was developed to impress the empress of Austria. The inventor of the machine, Wolfgang von Kempelen, claimed it could play chess of its own accord, but hidden inside the box was a human chess master who operated it through a series of levers and mirrors. Today’s AI benefits from a similar illusion. Sophisticated software functions only through thousands of hours of low-paid, menial labor — workers forced to work like robots in the hope that AI will become more like a human. Amazon even coined the term “artificial artificial intelligence” to describe this practice of keeping human labor integrated into seemingly automated processes.


From a labor perspective, AI is really an “extraction machine.” Beneath the polished exterior of our devices lies the complex network of components and relationships necessary to power it. When AI breaks down or does not function properly, human workers are there to step in and assist algorithms in completing the work. When Siri does not recognize a voice command or when facial recognition software fails to verify a person’s identity, these cases are often sent to human workers to establish what went wrong and how the algorithm could be improved.

Interviewing workers from Kenya, Uganda, Ireland, Iceland, the United Kingdom and the U.S., we found that the rise of AI is being powered by data annotators, content moderators, machine learning engineers, data center technicians, writers and artists. In Kenya and Uganda, we spoke to dozens of data annotators working 10-hour days for less than $2 an hour, performing repetitive, mind-numbing tasks with no opportunities for career progression. In Ireland, a prominent voice actor recounted discovering a synthesized version of her own voice, produced with AI tools without her knowledge. In Iceland, we visited data center workers who documented the energy-intensive nature of these centers, which consume more electricity than all of Iceland's households combined.


And facilitating AI systems is not the only way the technology extracts from workers. When AI is put into action, particularly in the workplace, it is often through management systems that centralize knowledge of the labor process and reduce the level of skill required to do a job by routinizing and simplifying it. Such systems extract more effort from workers by forcing them to work harder and faster for their employers’ benefit. In the United Kingdom, we interviewed warehouse workers whose working lives were governed by a complex series of AI systems that reduced them to human automatons. For many of us, this will be how we are most exposed to the damage caused by the extraction machine. We might not become content moderators anytime soon, but the same machine that entraps data annotators affects other jobs too.


The widespread adoption of current AI tools may feel inevitable. But that doesn’t have to preclude a fairer and more just future of work. Four principles should drive the expansion of AI.

First, we need to build and connect organizations devoted to exercising the collective power of workers. This entails not only institutionalizing local unions and worker associations, but also fostering a truly transnational workers’ struggle with campaigns and organizations that cross national boundaries and connect the struggles of blue- and white-collar workers throughout the global production networks of AI.


Second, because AI is often embedded in consumer goods and services, there are important openings for civil society and social movements to exert pressure on companies to guarantee minimum standards of pay and conditions for all workers throughout the supply chain.


Third, because some companies will be able to inoculate themselves against consumer pressure, governments should establish regulations that mandate minimum working standards for all workers. The risk for governments in regulating companies that profit from outsourced jobs is that work can quickly flow away to other companies and corners of the planet. Thus we need global agreements that set minimum standards, such as an International Labour Organization convention that covers and updates fundamental principles and rights at work. We also need to examine the potential of regulations such as the European Union’s supply chain directive, which will require large companies to impose ethical and environmental standards across their supply chains.

Fourth, there is a need for more expansive worker-led interventions not just to build collective power, but also to explore meaningful ways of implementing workplace democracy. Those could include cooperatives where workers collectively own and manage an organization, or company boards with half the seats reserved for workers of the company who share equal governance power on the board.

Should these four attempts at “rewiring the machine” be successfully put into practice, global capitalism might still stand in the way of improving the lives of AI’s global workforce. Faced with restrictions of the kind listed above, capitalists in the past have threatened to withdraw investment unless they can negotiate more favorable conditions. But this shouldn’t stop us from working together to defend workers’ rights against the latest generation of venture capital-funded tech titans.

James Muldoon is an associate professor in management at the University of Essex. Mark Graham is the director of Fairwork and the professor of internet geography at the Oxford Internet Institute. Callum Cant is a senior lecturer in management at the University of Essex. They are authors of the forthcoming “Feeding the Machine: The Hidden Human Labor Powering A.I.,” from which this piece is adapted.
