The author of Sapiens spoke with the press in Spanish to present Nexus, his new book about the dangers that lie ahead. He explained why AI is different from all previous technologies, what its autonomy would imply, how it threatens privacy, and what it could do to our psychology and social structures.
In his new book, Nexus, Yuval Noah Harari warns about the dangers of Artificial Intelligence (AI), which is “an agent,” meaning an entity capable of acting in the world, no longer just a tool of humans. “It is an independent agent,” he repeated in a conversation with journalists from Latin America and Spain. “That’s why it’s different from any previous technology we’ve invented.” It is as if the atomic bomb could decide where to fall and improve its own technology on its own.
AI can. “It starts by producing texts, images, computer code. And ultimately, it could create a more powerful AI,” he explained. An explosion of AI that would be beyond human control.
“People in the sector are trapped in this arms race mentality, which is extremely dangerous,” he stressed. The idea of developing AI as fast as possible and then, along the way, addressing problems as they arise, seems absurd to him. “It’s like someone put a car without brakes on the road and told you: ‘We’re focused on making it go as fast as it can, and if there’s any problem on the way, we’ll figure out how to invent brakes and install them,’” he joked. “It doesn’t work like that with cars.” And it’s unlikely to work with AI.
AI, an intelligence that is not based on carbon like the human brain, but on inorganic materials, is free from many of the organic limitations of our neurons. “Silicon chips can generate spies who never sleep, bankers who never forget, and despots who never die,” warned the professor who teaches history at the Hebrew University of Jerusalem.
Of course, he acknowledged that AI “has enormous positive potential.” As an example, he pointed to a healthcare revolution that may not be far off in the 21st century: “Today, there’s a shortage of doctors in many countries, but we could have an unlimited number of them, with much more knowledge than any human doctor, updated daily with all the findings from research around the world. We could have them with us 24/7, giving us advice tailored to our individual biology. This is something human doctors can’t do, and it will be much cheaper than a human doctor.”
And yet, he chose to focus Nexus on “the dangerous side of AI.” The reason is simple, he explained: “We have all these extremely rich and powerful corporations flooding people with positive stories, optimistic predictions, and ignoring the dangers. So it becomes the job of philosophers, historians like me, to shed light on the other side.” As a scientist, it would never occur to him to oppose the development of knowledge. “I just say we need to invest more in safety. Make sure this technology is safe, something that is common sense in all other industries.”
ChatGPT is the amoeba: what will the AI dinosaur be like?
Harari emphasized the importance of the latest AI having gained the ability to create stories. “I know many people say it writes texts that aren’t very good, creates music that isn’t very good, produces images with mistakes like people with six fingers. But these are just the first steps of AI. We haven’t seen anything yet.”
He estimated that the AI revolution is about 10 years old and proposed a comparison with biological evolution: today’s AI systems are mere amoebas, very simple AIs. It took amoebas billions of years to evolve into dinosaurs, mammals, and humans, because organic evolution is slow. Digital evolution is much faster: ChatGPT, the AI amoeba, won’t take billions of years to evolve into the AI dinosaur. It could take only 10 or 20 years. What would the AI T-Rex be like? What could it do?
What large language models (LLMs) do today is not an expanded version of the autocomplete function in Google Search, insisted the Israeli thinker. “It can create entire paragraphs, stories, and essays that are full of mistakes, but they make sense. This is something that is hard for humans: as a university professor, I read a lot of papers written by students who struggle to write a coherent essay that builds an argument by connecting different ideas. AI can already do this. Right now. Where will it be in five or ten years?”
It is also hard to imagine what impact it could have on people’s lives, from the most superficial to the most intimate aspects. Until now, humans have lived “protected in the cocoon of human culture,” Harari observed. “All the stories, all the music, all the poems, all the images were products of the human mind. Now, more and more of these cultural artifacts will be the product of an alien intelligence. What will this do to human society? To human psychology? No one knows.”
The Totalitarian Dream
In Homo Deus, Harari had already spoken about some of the risks that new information technologies pose to humanity. But by creating an agent capable of independence, no longer just a tool subject to human will, AI opens unexplored doors. Especially at a time of global crisis deeply defined by asymmetries and polarization. “The conversation is breaking down,” he noted. “People can’t agree on the most basic facts, people can no longer have a rational conversation.” And Harari can’t help but think that this happens right after “these tech giants created incredibly sophisticated information technologies that, as they promised us, were going to connect us.”
Though he is not a determinist—“everything depends on the decisions we make,” he said on more than one occasion—Harari believes it is essential “to understand that AI has a totalitarian potential like we’ve never seen before.”
Unlike authoritarian regimes, which control the political sphere but leave the individual alone most of the time, totalitarianism needs to know what every person is doing every minute of the day. “Stalin in the Soviet Union and Adolf Hitler in Germany didn’t just want to control the army and the budget: they wanted to control every aspect of life, the entirety of people’s lives, every moment. What you hear, what you see, what you say, who you meet. But Hitler and Stalin had limits when it came to controlling their subjects because they couldn’t follow everyone all the time.” AI can. It doesn’t need to rest or eat, and it has no desire to go out with a partner or take vacations in the mountains.
Even if Hitler or Stalin had had three intelligence agents for every citizen, who would read and process three reports a day on each person? “That information is just the foundation for the totalitarian regime; someone needs to read all the papers, analyze them, and find patterns.” AI won’t let the dossiers gather dust in an office. “AI could make it possible to create regimes of total surveillance that would annihilate privacy,” Harari summarized. “In an AI country, you don’t need human agents to follow everyone everywhere: you have smartphones, facial recognition, and computers. And you don’t need human analysts to review all the information: AI can review vast amounts of information (videos, images, texts, audio), analyze it, and recognize patterns.”
Less Naivety and More Compassion
Nexus revisits ideas from Sapiens and 21 Lessons for the 21st Century to remind us that cooperation enabled Homo sapiens to become what it is today. “The main argument of this book is that humanity gains enormous power by building large networks of cooperation, but the way these networks are built predisposes us to make reckless use of that power.”
A central theme is what he calls “the semi-official ideology of the age of computing and the internet,” according to which more information equals more knowledge. This fallacy, or “naive vision, which dominates places like Silicon Valley,” confuses information with truth. But information is raw material: “Truth is a rare subset within information.” The majority of the world’s information, he emphasized, “is garbage, not truth.” It’s very easy to create and spread false information. By contrast, “truth is costly; it requires time, money, and effort.”
The naive vision is, at its core, curiously anti-scientific. After all, according to this perspective—illustrated in the book—a racist is an ill-informed person who just needs more data about biology and history. This, as we know, doesn’t hold up in real life.
“This naive vision justifies the pursuit of increasingly powerful information technologies” and the release of brake-less cars on the road. At the same time, it perniciously complements the more cynical view of humanity, a conception where “historically, extreme right and extreme left meet,” he opined. “They share a deep distrust of the institutions that are the guarantors of truth. What we hear both from the extreme right and the extreme left is the suspicion of all the institutions that were traditionally established by human society to identify and promote truth, from the media to universities, including the courts.”
Why? “Both the far right and the far left share a very cynical view of the world, according to which the only reality is power: human beings are only interested in gaining power and all human interactions are power struggles.”
To end on a kinder note, Harari reminded us that it is not the only perspective. “We should remind ourselves that there is a more compassionate view of humans. Not everyone is obsessed with power. Not everyone who tells me something is trying to manipulate me. Does corruption exist? Yes, and for that, we have several institutions that balance each other. But the idea that all journalism is just an elitist cabal to manipulate people, that all science is just a conspiracy? This cynical view is destroying trust and democracy.”
Yuval Noah Harari is a historian, philosopher, and author of Sapiens, Homo Deus, and the children’s series Unstoppable Us. He is a professor in the Department of History at the Hebrew University of Jerusalem and a co-founder of Sapienship, a social impact company.