
Daron Acemoglu, Nobel laureate in economics and AI heretic: Is the future of employment at risk?
Daron Acemoglu, the recently crowned Nobel laureate in economics, used to think like most people: that artificial intelligence, like other technological advances, had come to improve everyone’s life.
However, after carefully studying the consequences of other technological advances, he came to the conclusion that AI could end up harming a large part of society.
Acemoglu discovered that robots were destroying jobs and lowering wages. “That opened my eyes. People thought it wouldn’t be possible for robots to have such negative effects,” he says.
Yes, there are a few nuts out there who believe AI will end humanity. But since the explosive emergence of ChatGPT last year, the main concern for most of us has been whether these tools will soon write, program, analyze, brainstorm, compose, design, and leave us unemployed. In response, Silicon Valley and big companies have been strangely united in their optimism. Yes, some people may lose out, they say. But don’t worry. AI will make us more productive, and that will be good for society. Ultimately, technology always is.
As a journalist who has been writing about technology and economics for years, I too have been optimistic. After all, I was backed by a surprising consensus among economists, who typically don’t agree on such basic things as what money is. For half a century, economists have venerated technology as an unequivocally positive force. Normally, according to the “dismal science,” giving one person a bigger piece of the economic pie requires giving a smaller piece to the poor sap next door. But for economists, technology was different. Invent the steam engine, the car, or TikTok, and voilà! The pie magically grows, allowing everyone to enjoy a bigger slice.
“Economists saw technological change as something amazing,” says Katya Klinova, head of AI, work, and economics at the nonprofit Partnership on AI. “How much do we need? As much as possible. When? Yesterday. Where? Everywhere.” To resist technology was to invite stagnation, poverty, darkness. Countless economic models, as well as all of modern history, seemed to demonstrate a simple and irrefutable equation: technology = prosperity for all.
There’s just one problem with that formulation: it’s turning out to be wrong. And the economist who is sounding the alarm the most (the heretic who argues that AI’s current trajectory is far more likely to harm us than help) is perhaps the world’s leading expert on the effects of technology on the economy.
Daron Acemoglu, an economist at MIT, is so prolific and respected that he had long been considered one of the top contenders for the Nobel Prize in Economics, which he finally won in 2024. Acemoglu had also been optimistic about technology. But now, with his old collaborator Simon Johnson, Acemoglu has written a 546-page treatise that debunks the vision of the Church of Technology, demonstrating how innovation often ends up being detrimental to society.
In their book Power and Progress, Acemoglu and Johnson showcase a series of major inventions over the past 1,000 years that, contrary to what we’ve been told, did not improve the lives of most people and sometimes even made them worse. And in the periods when major technological advances did lead to widespread good (the examples that AI optimists cite), it was only because ruling elites were forced to share the benefits of innovation. It was the struggle for technology, not technology itself, that ended up benefiting society.
“The prosperity of the past is not the result of any automatic and guaranteed gains from technological progress. We are the beneficiaries of progress, primarily because our predecessors made progress work for more people,” Acemoglu and Johnson write.
Today, at this peak moment of AI, will everyone benefit from the advancement, or will it end up being detrimental to society? Over the course of three conversations this summer, Acemoglu told me that he worries we are rushing down a path that will end in catastrophe. Around him, he sees a torrent of warning signs, of the kind that, in the past, ended up favoring a few at the expense of the many. Power concentrated in the hands of a handful of tech giants. Technologists, CEOs, and researchers focused on replacing humans instead of empowering them. An obsession with employee surveillance. Union membership at historic lows. Weakened democracies.
What Acemoglu’s research (what history tells us) shows is that technology-driven dystopias are not a rare science fiction oddity. In fact, they are much more common than we think.
“It’s very likely that, if we don’t correct our course, we will have a real two-tier system. A small number of people will be at the top (designing and using these technologies), and a very large number of people will be relegated to marginal or insignificant jobs,” Acemoglu believes. The result, he fears, is a future of lower wages for most.
Acemoglu shares these grim warnings not to urge workers to resist AI outright, nor to resign ourselves to counting the years until our economic doom. The expert sees the possibility of a positive outcome, but only if workers, policymakers, researchers, and perhaps even some tech magnates can manage it. Given how quickly ChatGPT has spread in the workplace (81% of large companies surveyed already say they are using AI to replace repetitive work), Acemoglu urges society to act quickly. And the first task is a tough one: to deprogram all of us from what he calls the “blind techno-optimism” propagated by the “modern oligarchy.” “This,” he says, “is the last opportunity for us to wake up.”
Acemoglu, 56, lives with his wife and two children in a quiet, affluent suburb of Boston, Massachusetts. But he was born 8,000 kilometers away, in Istanbul, Turkey, in a country mired in chaos. When he was three years old, the military took control of the government, and his father, a leftist professor who feared for his family, burned his books. The economy collapsed under the weight of triple-digit inflation, crushing debt, and high unemployment.
When Acemoglu was 13, the military detained and tried hundreds of thousands of people, torturing and executing many. Watching the violence and poverty around him, Acemoglu began to question the relationship between dictatorships and economic growth, a question he couldn’t study freely if he stayed in Turkey. At 19, he left to study in the United Kingdom. At the remarkably early age of 25, he completed his PhD in economics at the London School of Economics.
After moving to Boston to teach at MIT, Acemoglu quickly made a name for himself in the field he had chosen. To date, his most cited paper, written with Johnson and another former collaborator, James Robinson, addresses the question he had asked as a teenager: Do democratic countries develop better economies than dictatorships? It’s a huge, difficult question to answer because poverty might lead to dictatorship, not the other way around. So Acemoglu and his co-authors employed an ingenious solution.
They analyzed European colonies with high mortality rates, where history has shown that power remained concentrated in the hands of a few colonists willing to face death and disease, versus colonies with low mortality rates, where an influx of settlers boosted property rights and political rights that curbed state power. The conclusion: colonies that developed what they called “inclusive institutions” (which fostered investment and imposed the rule of law) ended up wealthier than their authoritarian neighbors.
In their ambitious and expansive book Why Nations Fail, Acemoglu and Robinson reject the idea that factors such as culture, climate, and geography made some countries rich and others poor. The only factor that really mattered was democracy.
The book was an unexpected bestseller, and economists regarded it as a paradigm shift. But Acemoglu was also pursuing another line of research that had fascinated him for some time: technological progress. Like almost all his colleagues, he began as a staunch techno-optimist. In 2008, he published a textbook for graduate students that supported the orthodoxy that technology is always good. “I followed the canon of economic models, and in all of them, technological change is the main driver of GDP per capita and wages. I didn’t question them,” Acemoglu admits.
But as he thought more about it, he began to wonder if there was something more to it. The first turning point was a paper he worked on with economist David Autor. It contained a surprising chart that traced the earnings of American men over five decades, adjusted for inflation. During the 1960s and early 70s, wages for all groups increased in parallel, regardless of education. But then, around 1980, wages for those with higher education skyrocketed, while wages for high school graduates and dropouts plummeted.
Something was worsening the lives of less-educated Americans. Was it technology?
Acemoglu had a hunch that it was. With Pascual Restrepo, one of his students at the time, he started thinking about automation as something that does two opposite things simultaneously: it takes tasks away from humans, and it creates new ones. According to his theory and Restrepo’s, the fate of workers largely depends on the balance between these two actions.
When the new tasks created compensate for the ones taken away, workers do well: they can access new jobs, which are usually better paid than the old ones. But when the tasks taken away outnumber the new ones, displaced workers have nowhere to go. In a subsequent empirical paper, Acemoglu and Restrepo showed that this was exactly what had happened. During the four decades following World War II, the two types of tasks balanced each other. But in the next three decades, the tasks taken away outnumbered the new ones by a wide margin.
In summary, automation was two-sided. Sometimes it was good, and sometimes it was bad.
It was the bad side that economists still needed convincing of. So Acemoglu and Restrepo, in search of more empirical evidence, focused on robots. What they found was astonishing: since 1990, each robot introduced had reduced employment by about six workers, while wages fell. “It opened my eyes. People thought it wouldn’t be possible for robots to have such negative effects,” says Acemoglu.
Many economists, clinging to technological orthodoxy, dismissed the effects of robots on human workers as a “transitory phenomenon.” In the end, they insisted, technology would turn out to be good for everyone. But to Acemoglu, that viewpoint seemed unsatisfactory. Could something that had been happening for three or four decades really be considered “transitory”? According to his calculations, robots had displaced more than half a million people in the U.S.
Perhaps, in the long run, the benefits of technology would reach most people. But, as economist John Maynard Keynes said, “in the long run, we are all dead.”
So Acemoglu set out to study the long term. First, he and Johnson looked at the course of Western history to see if there were other times when technology failed to live up to its promises. Was the recent era of automation, as many economists assumed, an anomaly?
Acemoglu and Johnson discovered that it wasn’t. Consider, for example, the Middle Ages, a period often seen as a technological wasteland. But the Middle Ages saw a series of innovations, including heavy plows with wheels, mechanical clocks, spinning machines, smarter crop rotation techniques, the widespread adoption of wheelbarrows, and increased use of horses. These advances made agriculture much more productive. But the reason we remember this period as the Dark Ages is precisely because the advances never reached the peasants who did the actual work. Despite all the technological advances, they worked longer hours, became increasingly malnourished, and probably lived shorter lives.
The surpluses created by the new technology went almost exclusively to the elites sitting at the top of society, such as the clergy, who used their newfound wealth to build towering cathedrals and consolidate their power.
Or think about the Industrial Revolution, which techno-optimists happily point to as the prime example of the benefits of innovation. In reality, the first long phase of the Industrial Revolution was disastrous for workers. The technology that mechanized spinning and weaving destroyed the livelihoods of artisans, handing textile jobs to unskilled women and children who were paid lower wages and had virtually no bargaining power. People crowded into cities to work in factories, living next to cesspits of human waste, breathing coal-polluted air, and being defenseless against epidemics like cholera and tuberculosis, which often wiped out their families. They were also forced to work longer hours while real incomes stagnated. “I have traversed the seat of war in the Peninsula,” lamented Lord Byron before the House of Lords in 1812. “I have been in some of the most oppressed provinces of Turkey; but never, under the most despotic of infidel governments, did I behold such squalid wretchedness as I have seen since my return, in the very heart of a Christian country.”
If the average citizen didn’t benefit, where did all the wealth generated by the new machines go? Once again, it was hoarded by the elites: the industrialists. “Normally, technology is controlled by a fairly small number of people who mainly use it for their own benefit. That’s the great lesson of human history,” notes Johnson.
But Acemoglu and Johnson acknowledge that technology hasn’t always been bad: sometimes it’s been almost miraculous. In England, during the second phase of the Industrial Revolution, real wages shot up by 123%. The average workday was reduced to 9 hours, child labor was reduced, and life expectancy increased. In the U.S., during the postwar boom from 1949 to 1973, real wages grew nearly 3% per year, creating a stable middle class. “There has never been, to our knowledge, another period of prosperity so rapid and shared,” write Acemoglu and Johnson, reaching back to the ancient Greeks and Romans. It is episodes like these that made economists so fervently believe in the power of technology.
What separates good technological times from bad ones? That’s the central question that Acemoglu and Johnson tackle in Power and Progress. Two factors, they say, determine the outcome of new technology. The first is the nature of the technology itself: whether it creates enough new tasks for workers to offset the ones it takes away.
The first phase of the Industrial Revolution, they claim, was dominated by textile machines that replaced spinners and weavers without creating enough new jobs for them, condemning them to unskilled jobs with lower wages and worse conditions.
In the second phase of the Industrial Revolution, by contrast, steam locomotives displaced drivers, but they also created a multitude of new jobs for engineers, construction workers, ticket sellers, porters, and the managers who supervised them. These were often highly skilled and well-paid jobs. And by reducing the cost of transportation, the steam engine also helped expand sectors like the metal casting industry and retail, creating jobs in those areas as well.
The second factor that determines the outcome of new technologies is the balance of power between workers and their bosses. Without sufficient bargaining power, Acemoglu and Johnson argue, workers can’t force their bosses to share the wealth generated by new technologies. And what determines the level of bargaining power is closely tied to democracy.
Electoral reforms (driven by the Chartist labor movement in 1830s Britain) were crucial for the Industrial Revolution to shift from bad to good. As the right to vote spread, Parliament became more responsive to the needs of the general population, passing laws to improve healthcare, crack down on child labor, and legalize unions.
The growth of organized labor, in turn, laid the groundwork for workers to win higher wages and better working conditions from their bosses in response to technological innovations, along with guarantees of retraining when new machines took over their old jobs.
In normal times, these reflections might seem purely academic, another debate about how to interpret the past. But there is one point on which both Acemoglu and the tech elite he criticizes agree: today, we are in the midst of another technological revolution with AI.
“What’s special about AI is its speed. It’s much faster than previous technologies. It’s omnipresent. It’s going to be applied in practically every sector. And it’s very flexible. All of this means that what’s being done now with AI might not be right, and if it’s not, if it takes a harmful direction, it could spread very quickly and become dominant. So I think there’s a lot at stake,” says Acemoglu.
Acemoglu acknowledges that his views are far from the consensus in his profession. But there are signs that his thinking is starting to have a broader impact in the battle of opinions over AI. In June, Gita Gopinath, who is second-in-command at the International Monetary Fund, gave a speech urging the world to regulate AI in a way that would benefit society, citing Acemoglu. Klinova, from the Partnership on AI, claims that senior figures from major AI labs read and comment on his work. And Paul Romer, who won the Nobel Prize in 2018 for his work showing how crucial innovation is for economic growth, says he has undergone his own change in thinking, reflecting Acemoglu’s.
“It was an illusion of economists, including myself, who wanted to believe that things would turn out well on their own. What I’m increasingly clear about is that that’s not the case. It’s obvious, in hindsight, that there are many forms of technology that can do a lot of harm, and many that can be enormously beneficial. The problem is having some entity act on behalf of society as a whole and say: ‘Let’s do the beneficial ones, let’s not do the harmful ones,'” says Romer.
Romer praises Acemoglu for challenging the prevailing opinion. “I really admire him, because it’s easy to be afraid of straying too far from the consensus. Daron is brave for being willing to test new ideas and pursue them regardless of what others think. There’s too much clustering around a narrow set of possible viewpoints, and we really need to stay open to exploring other possibilities,” he adds.
Earlier this year, weeks before its public release, a research initiative organized by Microsoft gave Acemoglu early access to GPT-4. While testing it, he was amazed by the responses he received from the bot. “Every time I had a conversation with GPT-4, I was so impressed that I’d end by saying, ‘Thank you.’ It definitely goes beyond what I would have thought feasible a year ago. I think it shows great potential to do a lot of things,” he says.
But the first experiments with AI also revealed its flaws. He doesn’t believe we are near the moment when the software can do everything humans do, a state that computer scientists call artificial general intelligence. As a result, he and Johnson don’t foresee a future of mass unemployment. People will continue working, but with lower wages. “What worries us is that the skills of a large number of workers will be much less valuable. So their income won’t be sustained,” he reflects.
Acemoglu’s interest in AI predates the popularity of ChatGPT, in part thanks to his wife, Asu Ozdaglar, who heads the MIT Department of Electrical Engineering and Computer Science. Through her, he received early exposure to machine learning, which was making it possible for computers to perform a broader range of tasks. As he delved deeper into automation, he began to wonder about its effects not only on factory jobs but also on office jobs.
“Robots are important, but how many workers do we have left? If you have a technology that automates knowledge work, office work is going to be much more important for this next stage of automation,” he says.
In theory, automation might end up being good for office workers. But right now, Acemoglu worries it might end up being negative because current society doesn’t have the necessary conditions to ensure that new technologies benefit everyone. First, thanks to decades of attacks on organized labor, only 10% of the workforce is unionized in the U.S., a historic low.
Without bargaining power, workers won’t be able to influence how AI tools are implemented at work, or who shares the wealth they create. And second, years of misinformation have weakened democratic institutions, a trend that will likely worsen with deepfake tools.
Furthermore, Acemoglu worries that AI isn’t creating enough new jobs to compensate for the ones it’s eliminating. In a recent study, he found that companies that hired more AI specialists over the last decade hired fewer employees overall. This suggests that, even before the ChatGPT era, employers were using AI to replace workers with software, rather than using it to make humans more productive, just as they had done with earlier forms of digital technologies.
Companies, of course, always advocate for cost-cutting and short-term profits. But Acemoglu also blames the AI research field for focusing on worker replacement. Computer scientists, he notes, judge their AI creations by whether their programs can achieve “human parity,” that is, completing certain tasks as well as people.
“For people in the field and the ecosystem in general, judging these new technologies by their ability to resemble humans has become second nature. This creates a very natural path toward automation and replicating what humans do, and often too little focus on how they can be helpful to humans, whose skills are very different from those of computers,” Acemoglu argues.
Acemoglu goes on to argue that creating tools that are useful for human workers, rather than tools that replace them, would benefit not only workers but also their bosses. Why focus so much energy on doing something humans already do reasonably well, when AI could help us do things we’ve never been able to do before?
Translated by Cristina Gálvez.