Yuval Noah Harari argues that AI has hacked the operating system of human civilization.

Author: Yuval Noah Harari

Yuval Noah Harari is a historian, philosopher and author of “Sapiens”, “Homo Deus” and the children's series “Unstoppable Us”. He is a professor in the history department at the Hebrew University of Jerusalem and co-founder of Sapienship, a social impact company.

Artificial intelligence

June 02, 2024


The Computers that Tell Stories Will Change the Course of Human History, Says Historian and Philosopher

Fears of artificial intelligence (AI) have haunted humanity since the dawn of the computer age. Until now, those fears focused on machines using physical means to kill, enslave, or replace people. But in recent years, new AI tools have emerged that threaten the survival of human civilization from an unexpected direction. AI has acquired some remarkable abilities to manipulate and generate language, whether through words, sounds, or images. In this way, AI has hacked the operating system of our civilization.

Language is the material from which almost all human culture is made. Human rights, for example, are not inscribed in our DNA. They are, rather, cultural artifacts that we create by telling stories and writing laws. Gods are not physical realities. They are, rather, cultural artifacts that we create by inventing myths and writing scriptures.

Money is also a cultural artifact. Banknotes are just pieces of colored paper, and today, more than 90% of money isn’t even in the form of banknotes, but rather just digital information stored on computers. What gives money value are the stories told about it by bankers, finance ministers, and cryptocurrency gurus. Sam Bankman-Fried, Elizabeth Holmes, and Bernie Madoff weren’t particularly good at creating real value, but they were all extremely capable storytellers.

What would happen once a non-human intelligence became better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think of ChatGPT and other new AI tools, they are often drawn to examples like schoolchildren using AI to write their essays. What will happen to the school system when children do that? But this type of question overlooks the big picture. Forget about school essays. Think instead about the 2024 U.S. presidential race, and try to imagine the impact of AI tools that could be used to mass-produce political content, fake news, and scriptures for new cults.

In recent years, the QAnon cult has gathered around anonymous online messages known as “Q drops.” Followers would collect, revere, and interpret these Q drops as a sacred text. While, as far as we know, all previous Q drops were composed by humans and bots simply helped spread them, in the future we could see the first cults in history whose revered texts were written by a non-human intelligence. Throughout history, religions have claimed a non-human source for their sacred books. Soon, that could become a reality.

On a more prosaic level, we could soon find ourselves having long online discussions about abortion, climate change, or the Russian invasion of Ukraine with entities we believe to be human but which are actually AI. The trouble is that it is pointless for us to spend time trying to change the stated opinions of an AI bot, while the AI could hone its messages so precisely that it has a strong chance of influencing us.

Thanks to its mastery of language, AI could even form intimate relationships with people and use the power of intimacy to change our opinions and worldviews. While there is no evidence that AI has consciousness or feelings of its own, to foster a false intimacy with humans, it’s enough for AI to make them feel emotionally attached to it. In June 2022, Blake Lemoine, a Google engineer, publicly claimed that the LaMDA AI chatbot he was working on had become conscious. The controversial statement cost him his job. The most interesting part of this episode wasn’t Lemoine’s claim, which was likely false, but his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?

In a political battle for minds and hearts, intimacy is the most effective weapon, and AI has just acquired the ability to mass-produce intimate relationships with millions of people. We all know that, in the past decade, social media has become a battleground for controlling human attention. With the new generation of AI, the battleground is shifting from attention to intimacy. What will happen to human society and psychology when AI fights against AI in a battle to simulate intimate relationships with us, which could then be used to persuade us to vote for certain politicians or buy specific products?

Even without creating “false intimacy,” the new AI tools would have an immense influence on our opinions and worldviews. People might come to use a single AI advisor as an all-knowing oracle. No wonder Google is terrified. Why bother searching when I can simply ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can simply ask the oracle to tell me the latest news? And what purpose do ads serve, when I can simply ask the oracle to tell me what to buy?

And these scenarios don’t even fully reflect the big picture. What we are talking about is the potential end of human history. Not the end of history, but the end of the part dominated by humans. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations, such as religions and laws. History is the process through which laws and religions shape food and sex.

What will happen to the course of history when AI takes over culture and begins to produce stories, melodies, laws, and religions? Previous tools, such as the printing press and radio, helped spread human cultural ideas, but they never created entirely new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, a completely new culture.

At first, AI will probably imitate the human prototypes it was trained on in its early stages, but with each passing year, AI culture will dare to go where no human has gone before. For millennia, humans have lived within the dreams of other humans. In the coming decades, we might find ourselves living within the dreams of an alien intelligence.

The Fear of AI Has Haunted Humanity for Decades, But for Thousands of Years, Humans Have Been Haunted by a Much Deeper Fear

For millennia, humans have feared being trapped in a world of illusions. We have always appreciated the power of stories and images to manipulate our minds, and so, since ancient times, humans have dreaded being caught in such a world.

In the 17th century, René Descartes feared that perhaps an evil demon was trapping him in a world of illusions, creating everything he saw and heard. In ancient Greece, Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave for their whole lives, facing a blank wall that serves as a kind of screen. On that screen they see various shadows projected, and the prisoners mistake those illusions for reality.

In ancient India, Buddhist and Hindu sages pointed out that all human beings lived trapped in Maya, the world of illusions. What we normally consider reality is often nothing more than fiction in our own minds. People can wage entire wars, kill others, and be willing to die themselves due to their belief in this or that illusion.

The artificial intelligence revolution is bringing us face to face with Descartes’ demon, Plato’s cave, and Maya. If we are not careful, we may find ourselves trapped behind a curtain of illusions that we cannot tear down, or do not even realize is there.

Of course, the new power of AI could also be used for beneficial purposes. I won’t dwell on this topic because those developing AI talk about it enough. The job of historians and philosophers like me is to point out the dangers. But undoubtedly, AI can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis. The question we face is how to ensure that the new AI tools are used for good and not for evil. To do so, we must first appreciate the true capabilities of these tools.

Since 1945, we’ve known that nuclear technology can generate cheap energy for the benefit of humanity but can also physically destroy human civilization. That’s why we have restructured the entire international order to protect humanity and ensure that nuclear technology is mainly used for good. Now we must face a new weapon of mass destruction that could annihilate our mental and social world.

We can still regulate the new AI tools, but we must act quickly. While nuclear weapons cannot invent more powerful nuclear weapons, AI can create exponentially more powerful AI. The crucial first step is to demand strict safety controls before powerful AI tools are released into the public domain. Just as a pharmaceutical company cannot launch new drugs before testing their short- and long-term side effects, tech companies should not release new AI tools until they are proven to be safe. We need an equivalent of the Food and Drug Administration for new technologies, and we need it yesterday.

Won’t slowing down the public rollout of AI make democracies fall behind more ruthless authoritarian regimes? Quite the opposite. Uncontrolled AI deployment would create social chaos that would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations depend on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thus destroying democracy.

We have just encountered an alien intelligence here on Earth. We don’t know much about it, except that it could destroy our civilization. We should end the irresponsible deployment of AI tools in the public sphere and regulate AI before it regulates us. And the first regulation I would suggest is to make it mandatory for AI to disclose that it is AI. If I am having a conversation with someone and cannot tell whether it is a human or an AI, that is the end of democracy.

This text was generated by a human.


Sources


Artificial Intelligence – THE ECONOMIST. Original in English.

Translated by the Future Lab Team.

