Yuval Harari, interviewed by the IMF:
“Artificial Intelligence is the most powerful technology ever created”

Press contribution from the International Monetary Fund (IMF). Finance & Development magazine.

The Israeli historian and writer, one of the leading scholars of AI's impact on human evolution, warned in an interview that the new technology will strip people of their exclusive ability to sway emotions through storytelling.

According to Yuval Harari, the impact of Artificial Intelligence on humanity will be so profound that, in the not-so-distant future, humans will have to relinquish their monopoly on influencing others through narratives, the very ability that has always allowed them to dominate the planet.

The Israeli historian and writer, who in his book Nexus explained that the world is leaving behind the money-based economy to replace it with an information-based economy, was interviewed on a podcast produced by the International Monetary Fund (IMF). A summarized version of that interview was published in Finance & Development (F&D), an IMF publication. The following is the full text of the interview:

Unlike the Homo economicus — the hyper-rational model invented to elucidate our financial dilemmas — the decisions of Homo sapiens have always depended greatly on social context and emotional responses to narratives.

Curious since childhood, Yuval Noah Harari writes today about human evolution as a philosopher and historian. Sapiens: A Brief History of Humankind, published in 2014, became an international phenomenon translated into almost 40 languages. His latest work, Nexus: A Brief History of Information Networks from the Stone Age to AI, examines the evolution of human communication networks and the possibility that Artificial Intelligence (AI) will surpass us on our own turf.

Harari is currently a professor of History at the Hebrew University of Jerusalem and a prominent senior researcher at the Centre for the Study of Existential Risk at the University of Cambridge. In a conversation with Bruce Edwards, he spoke about narratives, trust, and AI.

F&D: One of the basic principles of your history of Homo sapiens is that we are the only species with the ability to imagine the future. How has storytelling allowed us to dominate other species evolving alongside us?

YNH: Power lies in cooperation. For example, chimpanzees can only cooperate in very small groups, but Homo sapiens' cooperation is unlimited. Today, there are 8 billion people in the world who, despite many differences and conflicts, belong almost without exception to the same commercial networks. Much of the food, energy, and clothing we consume comes from the other side of the world, from people we've never met. These vast cooperation networks are our superpower, and they are based on trust.

Then we must ask where trust between strangers comes from. From stories.

Trust is forged by telling stories that many people believe in. It’s easiest to see in religion: millions of strangers can cooperate in charitable projects like building hospitals or fighting holy wars because they believe in the same mythology. But the same happens with the economy and the financial system, because no story has ever been as successful as the story of money. Basically, it is the only story that everyone in the world believes in.

F&D: But you refer to money as a mere cultural artifact.

YNH: Exactly. Money is a story, an invention; it has no objective value. You can’t eat or drink banknotes or coins, but you can give a stranger a worthless piece of paper in exchange for bread that you can eat. The fundamental premise is that we all believe in the same narrative about money; if we stop believing, everything collapses. This has happened throughout history, and it is happening today with new types of currencies. What are Bitcoin, the Ethereum network, and all these cryptocurrencies? They are narratives. Their value depends on the stories people tell and believe. And Bitcoin’s value rises and falls as people’s trust in that narrative rises and falls.

F&D: According to your latest book, Nexus, we are leaving the money economy for an economy based on information exchange, not currency. What is the information economy like?

YNH: I’ll give you an example: one of the most important companies in my life is Google. I use it every day, all day. But my bank statement shows no money exchanged; neither do I pay Google, nor does Google pay me. What Google gives me is information.

F&D: And you give Google information.

YNH: Exactly. I give Google a lot of information about what I like, what I don’t like, what I think — anything — and Google uses it. All over the world, more and more transactions follow this format of information for information, not something for money. And power, wealth, and the meaning of wealth are shifting from having a lot of money to having a lot of petabytes of information. What happens when the most powerful people and companies are rich in the sense that they have a gigantic amount of stored information, which they don’t even bother to monetize because they can get everything they want in exchange for information? Why would we need money? If information serves to buy goods and services, money becomes unnecessary.

F&D: Nexus posits that power structures and belief systems emerged from narratives throughout human evolution and contextualizes this with current technology. What does it say about the dangers of these increasingly advanced information networks?

YNH: The first message is almost philosophical: information and truth are not the same thing. Most information is fictional, implausible, and misleading. Truth is expensive; you need to research, gather data, and invest time, effort, and money to find it. And often, the truth hurts — that’s why it is a very small part of information.

Another message is that we are unleashing the most powerful technology ever created on the world: AI. AI is radically different from the printing press, the atomic bomb, or any other invention. It is the first technology in history that can make decisions and create new ideas on its own. An atomic bomb cannot decide where to detonate; AI can. It can make financial decisions and invent financial instruments independently — and the AI we know today, in 2024, is just the rudimentary form of this revolution. We have no idea what is coming.

One important point, especially for the IMF, is that the pioneers of AI are just a handful of countries. Most countries are far behind, and if we are not careful, we will see a repetition of the Industrial Revolution on an exponential scale. In the 19th century, only a few countries — Britain, then the United States, Japan, and Russia — led industrialization, while most others didn’t understand what was happening. Within decades, the entire world was either directly conquered or indirectly dominated by those few industrial powers.

Now we have the AI tsunami. Think about what the steam engine and telegraph did to global inequality — then multiply it by 10, 100, or 1,000. That’s the kind of impact we could see if a few countries monopolize the enormous power of AI, leaving the rest exploited and dominated in unprecedented ways.

F&D: Unchecked AI is dangerous, as you say in Nexus. But as you emphasize in Sapiens, humanity has trampled the planet like “gods who don’t know what they want.” Is there anything in economics capable of mitigating the impact of these two potentially destructive forces combined?

YNH: Economics is about setting priorities. Since resources are limited while desires and needs abound, economics raises questions of both truth and desire.

The best system we’ve invented to address desires is democracy: we ask people what they want. However, democracy is not ideal for deciding what is true. If we want to know if the atmosphere is warming due to human activity or natural cycles, the answer does not come from a democratic vote. It is a matter of truth, not desire.

If we want to know the facts, we need expert institutions that know how to analyze data — but not dictate desires or tell us what to do.

F&D: But democratic decisions are based on stories people hear: what happens when those stories no longer come from humans?

YNH: It causes an earthquake. Societies are built on trust, which is based on information and communication. AI-generated stories will profoundly shake that trust.

AI can be enormously beneficial, but if it runs unchecked, it could pose an existential danger. I don’t see AI as Artificial Intelligence, but rather Alien Intelligence — not from outer space, but from our own laboratories. It thinks and makes decisions in fundamentally different ways than humans. Letting billions of alien agents loose without ensuring they use their power for our benefit is extremely dangerous.


Yuval Noah Harari: “Most of the information is not true, it’s garbage.”

Was Deckard a replicant? That’s the great debate surrounding Blade Runner. The film’s protagonist, inspired by Philip K. Dick’s novel Do Androids Dream of Electric Sheep? (1968), remains shrouded in mystery, a debate that decades later leads us to an important question: What makes us human?

According to Omar Hatamleh, Director of Artificial Intelligence at NASA’s Goddard Space Flight Center, in 50 years it will be nearly impossible to distinguish a humanoid robot from a person. The Voight-Kampff test, designed to distinguish between humans and replicants, will be useless to Harrison Ford.

The threat of AI is so great that in March 2023, Yuval Noah Harari, along with Elon Musk and Apple co-founder Steve Wozniak, signed a manifesto calling for a six-month halt to the “out-of-control race” of ChatGPT. A year and a half has passed, and not only has it not been stopped, but AI has hit the accelerator pedal to the floor. Are we in time to do something to prevent AI from destroying us, or is it a lost battle? Who can kill the algorithm?

In his new book, Nexus (Debate), philosopher Yuval Noah Harari, the author of Sapiens, a work that has sold 25 million copies since its 2013 publication, recounts how different societies and political systems have used information to achieve their goals and impose order, for good and for ill. Harari focuses his work on the crucial moment we face today, when non-human intelligence threatens our very existence.

EVERYONE TALKS, NO ONE LISTENS: TOTALITARIANISM, LIES, AND CONSPIRACY THEORIES

In Nexus, Yuval Noah Harari analyzes the role of information networks from the Stone Age to the rise of AI. At the beginning of his remarks, the historian pointed out the contradiction in which our society lives: “We have the most important information technology in history, but people seem unable to talk.” In this regard, the author of Homo Deus: A Brief History of Tomorrow cited the example of what is happening in the United States, where Democrats and Republicans find it impossible to agree on anything. This ideological clash is evident on social media, and Harari commented on the issue: “The easiest way to capture attention is to press the button of hatred or fear and deliberately spread conspiracy theories.” He also stated, “The big tech giants promised us that they would connect everyone and spread the truth. We have very sophisticated technology to communicate, but we do not converse.” For Harari, the essential difference between democracy and a totalitarian regime is “conversation,” which is currently in grave danger due to the crisis of journalism and the spread of “lies and conspiracy theories” on these social networks.

In connection with the above, Harari discussed how wrong it is to equate information with truth: “Information is not knowledge. The naive view that dominates places like Silicon Valley holds that the more information, the more people know. But most of the information in the world is trash. It is not true. True information is scarce because writing an authentic report requires time, money, and effort. Writing a lie, false information is simple; you don’t need to invest anything. Fiction is cheap, and truth is usually complicated. Most people prefer simple stories. If we want the truth to prevail, we have to invest in it. We need newspapers, societies, and academic institutions. This is the great responsibility of current societies: to resist that naive view, like the one from people like Elon Musk, that if there is more information, people will have more chances of knowing the truth.”

The Israeli historian warned during the press conference about the “totalitarian potential” of artificial intelligence and provided several examples, such as his own country, Israel, “which is building a total surveillance regime” in the occupied territories. According to the author, AI’s “immense capacity” to collect and analyze information will make possible “the complete surveillance regime that eliminates privacy,” something that Hitler and Stalin did not achieve but that countries like Iran are now beginning to implement, with facial recognition cameras that monitor women and punish them for failing to wear a veil while traveling in a vehicle. These surveillance devices spot women driving without a veil, identify them by name and phone number, and immediately send an SMS informing them of the infraction and the punishment: they must abandon their car, which is confiscated from that moment on. This is not a science fiction scenario. It is happening now. Although he pointed out the dangers linking AI and totalitarianism, Yuval Noah Harari made an important clarification: this outcome is not deterministic; societies can make decisions to reverse it.

THE RESPONSIBILITY OF BIG CORPORATIONS

Considering that the AI revolution is in its early stages, Harari emphasized the need to ask questions. “It has enormous positive potential. If I speak little about it, it’s because there are large, extremely rich, and powerful corporations that are already flooding us with positive messages and ignoring the dangers,” he said. For this reason, he sees it as crucial to generate “a debate about the responsibility of media giants” like Facebook, Twitter, Instagram, or TikTok. “Corporations must be held responsible for what their algorithms decide, just as the editor of the New York Times is responsible for its front page,” he pointed out. Another important issue being debated right now in countries like Spain is how to distinguish between censorship and the mechanisms of platforms to prevent the spread of lies and hoaxes. The philosopher commented, “It’s important that people know how to see the difference between human editors and corporate algorithms. People have the right to stupidity, to tell lies, except in extreme cases defined by the law; it’s part of freedom of speech,” he said. “The problem is not the humans but the algorithms of corporations whose business model is based on engagement, which means that their priority is for people to spend as much time as possible on their platforms to sell ads and gather data they can sell to third parties. We must be very careful about censoring human users,” he concluded.

THE END OF THE WORLD WE HAVE KNOWN

Although the book takes us through history from antiquity to the present day, much of the presentation focused on AI and its impact on our societies. Yuval Noah Harari explained what makes Artificial Intelligence different from any other technology we have known: “AI is different because it is not a tool, it is an agent, an independent agent. Atomic weapons have great power, but that power is in the hands of human beings. The bomb itself cannot develop any military strategy. AI is different; it can make decisions on its own.” Harari gave the example of what is happening in the media and social networks with the rise of AI: “In a newspaper, the most important decisions are made by the editor, he is the one who decides what goes on the front page, what will be the cover. Now, in some of the world’s largest platforms, like Facebook and Twitter, the role of the editor has been replaced by AI. It is the algorithms that decide which story is recommended and which one is placed at the top of the news feed. And AI can also come up with new ideas on its own. It’s out of our control. That is what makes it different from any previous revolution.”

“Current AI is an amoeba, but it will develop millions of times faster than we did”

One of the most common fears among journalists at the press conference was AI’s ability to construct narratives like humans. Harari commented on this topic: “The latest developments in AI show its ability to create stories. There used to be AI to monitor and highlight what caught attention, but it didn’t write, make music, or generate images. Now it can. I know people say those texts aren’t very good, the scores are of poor quality, the videos and images have errors, like hands with six fingers, but we must understand that we are at the early stages of a technology that is only ten years old. We haven’t seen anything yet. If we draw a parallel with biological evolution, current AI is an amoeba, but it will develop millions of times faster than we did. AI will go from amoeba to dinosaur in just ten or twenty years. The texts from ChatGPT have errors, but they are paragraphs, texts, essays that make sense. This is something that many humans struggle with. I’m a university professor and many students struggle to write a coherent essay linking various arguments. AI already does it. The cultural artifacts of the coming years will be of an alien intelligence. What reaction will this provoke in human society? No one knows, and that is the big question.”

A THREAD OF HOPE

The final part of Yuval Noah Harari’s appearance was marked by a question about the possibility of some light in this situation. Can AI contribute something positive to humanity? His answer was clear: “Without a doubt. AI has enormous potential. And I don’t think all the people in Silicon Valley are evil. AI can provide us with the best healthcare in the coming years. There is a shortage of doctors around the world, and AI can offer a solution: monitoring us 24 hours a day, checking our blood pressure, having all our biological information… And all of this will be much cheaper, and it will be for everyone, even the poorest people and those living in remote areas.”

He then gave another example of AI’s beneficial applications: “Every year there are over a million deaths from traffic accidents, most caused by human error, many from people drinking alcohol while driving. If you give AI control over traffic with autonomous vehicles, it can save a million lives: AI won’t fall asleep at the wheel or drink alcohol.”

The Israeli thinker admitted at that point that he doesn’t talk much about the positive aspects of AI, although he noted that there are sections of Nexus where that beneficial potential is laid out. As he stated, there is a reason for focusing almost exclusively on the dangers: “There are very rich companies that flood the media and platforms with very positive messages about what AI will do and tend to ignore the dangers. The job of philosophers, academics, and thinkers is to focus on the dark side, although that doesn’t mean there are only dangers. We shouldn’t stop this evolution; what we’re saying is that we need to invest more in safety. It’s about applying common sense, just like in any other industry. The problem with people in the AI sector is that they are caught in a mentality of an arms race: they don’t want anyone to beat them in the pursuit of advancements. And this is very dangerous.”

“Now philosophy can begin to debate very practical issues”

And at this point, what can save us? Yuval Noah Harari is clear: “Philosophy.” “For thousands of years, philosophers have debated theoretical questions with little impact on society. Few people act according to a theoretical philosophy, we function more with emotions than with intellectual ideas. Now philosophy can begin to debate very practical issues. I’ll give an example: What should the algorithm do if an autonomous vehicle is about to run over two children, and the only way to avoid the accident is to sacrifice the car’s owner who is sleeping in the back? This is a practical question and we need to tell AI what to do. And this question is not just for engineers and mathematicians, it’s also for philosophers. All of this connects with important human concepts like free will, the meaning of life, and the need to recognize different forms of AI as lives with rights and an ethical category in our societies.”

Since we left the movie theater, after taking the VHS tape out of the video player, and turning off the TV with the remote, many of us felt the same sensation: we all wanted to live what the replicant Roy Batty lived. Perhaps the long-awaited moment has come to see attack ships on fire beyond Orion’s shoulder, and C-beams glittering in the dark near the Tannhäuser Gate. AI is already in our lives, it is an irreversible process, and if, as Yuval Noah Harari warns, we do nothing to regulate it, to control it under the umbrella of philosophy, we will be the ones lost in time, we humans will become those tears in the rain.


Interview and article by Miguel Ángel Santamarina, from El Bar de Zenda

Yuval Noah Harari is a historian, philosopher, and author of Sapiens, Homo Deus, and the children’s series Unstoppable Us. He is a professor in the history department at the Hebrew University of Jerusalem and co-founder of Sapienship, a social impact company.

Yuval Noah Harari argues that AI has hacked the operating system of human civilization.

The Computers that Tell Stories Will Change the Course of Human History, Says Historian and Philosopher

Fears of ARTIFICIAL INTELLIGENCE (AI) have haunted humanity since the very beginning of the computer age. Until now, these fears focused on machines using physical means to kill, enslave, or replace people. But in recent years, new AI tools have emerged that threaten the survival of human civilization from an unexpected direction. AI has acquired some remarkable abilities to manipulate and generate language, whether through words, sounds, or images. In this way, AI has hacked the operating system of our civilization.

Language is the material from which almost all human culture is made. Human rights, for example, are not inscribed in our DNA. They are, rather, cultural artifacts that we create by telling stories and writing laws. Gods are not physical realities. They are, rather, cultural artifacts that we create by inventing myths and writing scriptures.

Money is also a cultural artifact. Banknotes are just pieces of colored paper, and today, more than 90% of money isn’t even in the form of banknotes, but rather just digital information stored on computers. What gives money value are the stories told about it by bankers, finance ministers, and cryptocurrency gurus. Sam Bankman-Fried, Elizabeth Holmes, and Bernie Madoff weren’t particularly good at creating real value, but they were all extremely capable storytellers.

What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think of ChatGPT and other new AI tools, they are often drawn to examples like schoolchildren using AI to write their essays. What will happen to the school system when children do that? But this type of question overlooks the big picture. Forget about school essays. Think about the next U.S. presidential race in 2024 and try to imagine the impact of AI tools that could be used to mass-produce political content, fake news, and writings for new cults.

In recent years, the QAnon cult has gathered around anonymous online messages known as “Q drops.” Followers would collect, revere, and interpret these Q drops as a sacred text. While, as far as we know, all previous Q drops were composed by humans and bots simply helped spread them, in the future we could see the first cults in history whose revered texts were written by a non-human intelligence. Throughout history, religions have claimed a non-human source for their sacred books. Soon, that could become a reality.

On a more prosaic level, we could soon find ourselves having long online discussions about abortion, climate change, or the Russian invasion of Ukraine with entities we believe to be human but which are actually AI. The catch is that it is pointless for us to spend time trying to change the stated opinions of an AI bot, while the AI could hone its messages with such precision that it would have a good chance of influencing us.

Thanks to its mastery of language, AI could even form intimate relationships with people and use the power of intimacy to change our opinions and worldviews. While there is no evidence that AI has consciousness or feelings of its own, to foster a false intimacy with humans, it’s enough for AI to make them feel emotionally attached to it. In June 2022, Blake Lemoine, a Google engineer, publicly claimed that the LaMDA AI chatbot he was working on had become conscious. The controversial statement cost him his job. The most interesting part of this episode wasn’t Lemoine’s claim, which was likely false, but his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?

In a political battle for minds and hearts, intimacy is the most effective weapon, and AI has just acquired the ability to mass-produce intimate relationships with millions of people. We all know that, in the past decade, social media has become a battleground for controlling human attention. With the new generation of AI, the battleground is shifting from attention to intimacy. What will happen to human society and psychology when AI fights against AI in a battle to simulate intimate relationships with us, which could then be used to persuade us to vote for certain politicians or buy specific products?

Even without creating “false intimacy,” the new AI tools would have an immense influence on our opinions and worldviews. People might come to use a single AI advisor as an all-knowing oracle. No wonder Google is terrified. Why bother searching when I can simply ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can simply ask the oracle to tell me the latest news? And what purpose do ads serve, when I can simply ask the oracle to tell me what to buy?

And these scenarios don’t even fully reflect the big picture. What we are talking about is the potential end of human history. Not the end of history, but the end of the part dominated by humans. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations, such as religions and laws. History is the process through which laws and religions shape food and sex.

What will happen to the course of history when AI takes over culture and begins to produce stories, melodies, laws, and religions? Previous tools, such as the printing press and radio, helped spread human cultural ideas, but they never created entirely new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, a completely new culture.

At first, AI will probably imitate the human prototypes it was trained on in its early stages, but with each passing year, AI culture will dare to go where no human has gone before. For millennia, humans have lived within the dreams of other humans. In the coming decades, we might find ourselves living within the dreams of an extraterrestrial intelligence.

The Fear of AI Has Haunted Humanity for Decades, But for Thousands of Years, Humans Have Been Haunted by a Much Deeper Fear

For millennia, humans have feared being trapped in a world of illusions. We have always appreciated the power of stories and images to manipulate our minds and create illusions. Consequently, since ancient times, humans have feared being caught in such a world.

In the 17th century, René Descartes feared that an evil demon might be trapping him in a world of illusions, creating everything he saw and heard. In ancient Greece, Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave their whole lives, facing a blank wall that serves as a screen. On that screen they see various shadows projected, and the prisoners mistake those illusions for reality.

In ancient India, Buddhist and Hindu sages pointed out that all human beings lived trapped in Maya, the world of illusions. What we normally consider reality is often nothing more than fiction in our own minds. People can wage entire wars, kill others, and be willing to die themselves due to their belief in this or that illusion.

The revolution of artificial intelligence is bringing us face to face with Descartes’ demon, Plato’s cave, and the Maya. If we are not careful, we may get trapped behind a curtain of illusions that we cannot tear down or don’t even realize is there.

Of course, the new power of AI could also be used for beneficial purposes. I won’t dwell on this topic because those developing AI talk about it enough. The job of historians and philosophers like me is to point out the dangers. But undoubtedly, AI can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis. The question we face is how to ensure that the new AI tools are used for good and not for evil. To do so, we must first appreciate the true capabilities of these tools.

Since 1945, we’ve known that nuclear technology can generate cheap energy for the benefit of humanity but can also physically destroy human civilization. That’s why we have restructured the entire international order to protect humanity and ensure that nuclear technology is mainly used for good. Now we must face a new weapon of mass destruction that could annihilate our mental and social world.

We can still regulate the new AI tools, but we must act quickly. While nuclear weapons cannot invent more powerful nuclear weapons, AI can create exponentially more powerful AI. The crucial first step is to demand strict safety controls before powerful AI tools are released into the public domain. Just as a pharmaceutical company cannot launch new drugs before testing their short- and long-term side effects, tech companies should not release new AI tools until they are proven to be safe. We need an equivalent of the Food and Drug Administration for new technologies, and we need it yesterday.

Won’t slowing down the public rollout of AI make democracies fall behind more ruthless authoritarian regimes? Quite the opposite. Uncontrolled AI deployment would create social chaos that would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations depend on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thus destroying democracy.

We have just encountered an extraterrestrial intelligence here on Earth. We don’t know much about it, except that it could destroy our civilization. We should end the irresponsible deployment of AI tools in the public sphere and regulate AI before it regulates us. And the first regulation I would suggest is to make it mandatory for AI to disclose that it is AI. If I am having a conversation with someone and cannot distinguish whether it is a human or AI, that is the end of democracy.

This text was generated by a human.


Sources


Artificial Intelligence – THE ECONOMIST. Original in English.

Translated by the Future Lab Team.
