Yuval Noah Harari: “Most of the information is not true, it’s garbage.”


Yuval Noah Harari is a historian, philosopher and author of “Sapiens”, “Homo Deus” and the children's series “Unstoppable Us”. He is a professor in the history department at the Hebrew University of Jerusalem and co-founder of Sapienship, a social impact company.


June 15, 2024


Was Deckard a replicant? That is the great debate surrounding Blade Runner. The film, inspired by Philip K. Dick’s novel Do Androids Dream of Electric Sheep? (1968), never resolves the mystery surrounding its protagonist, and decades later that debate brings us to an important question: what makes us human? According to Omar Hatamleh, Director of Artificial Intelligence at NASA’s Goddard Space Flight Center, in 50 years it will be nearly impossible to distinguish a humanoid robot from a person. The Voight-Kampff test, designed to distinguish humans from replicants, will be useless to Harrison Ford. The threat of AI is so great that in March 2023, Yuval Noah Harari, along with Elon Musk and Apple co-founder Steve Wozniak, signed a manifesto calling for a six-month halt to the “out-of-control race” set off by ChatGPT. A year and a half has passed, and not only has the race not stopped, AI has pressed the accelerator to the floor. Are we still in time to keep AI from destroying us, or is the battle already lost? Who can kill the algorithm?

In his new book, Nexus (Debate), the philosopher Yuval Noah Harari, author of Sapiens, a work that has sold 25 million copies since its 2013 publication, recounts how different societies and political systems have used information to achieve their goals and impose order, for good and for ill. Harari centers the book on the crucial moment we face today, when a non-human intelligence threatens our very existence.

EVERYONE TALKS, NO ONE LISTENS: TOTALITARIANISM, LIES, AND CONSPIRACY THEORIES

In Nexus, Yuval Noah Harari analyzes the role of information networks from the Stone Age to the rise of AI. At the beginning of his remarks, the historian pointed out the contradiction our society lives with: “We have the most important information technology in history, but people seem unable to talk.” The author of Homo Deus: A Brief History of Tomorrow cited the example of the United States, where Democrats and Republicans find it impossible to agree on anything. This ideological clash is evident on social media, and Harari addressed it directly: “The easiest way to capture attention is to press the button of hatred or fear and deliberately spread conspiracy theories.” He also stated, “The big tech giants promised us that they would connect everyone and spread the truth. We have very sophisticated technology to communicate, but we do not converse.” For Harari, the essential difference between democracy and a totalitarian regime is “conversation,” which is currently in grave danger due to the crisis of journalism and the spread of “lies and conspiracy theories” on social networks.

Building on this, Harari explained how wrong it is to equate information with truth: “Information is not knowledge. The naive view that dominates places like Silicon Valley holds that the more information there is, the more people know. But most of the information in the world is trash. It is not true. True information is scarce because writing an accurate report requires time, money, and effort. Writing a lie or a piece of false information is simple; you don’t need to invest anything. Fiction is cheap, and truth is usually complicated. Most people prefer simple stories. If we want the truth to prevail, we have to invest in it. We need newspapers, societies, and academic institutions. This is the great responsibility of today’s societies: to resist that naive view, held by people like Elon Musk, that if there is more information, people will have more chances of knowing the truth.”

The Israeli historian warned during the press conference about the “totalitarian potential” of artificial intelligence and offered several examples, such as his own country, Israel, “which is building a total surveillance regime” in the occupied territories. According to the author, AI’s “immense capacity” to collect and analyze information will make possible “the complete surveillance regime that eliminates privacy,” something that Hitler and Stalin never achieved but that countries like Iran are now beginning to implement, with facial recognition cameras that monitor women and punish them for failing to wear a veil while traveling in a vehicle. These surveillance cameras spot a woman driving without a veil, identify her name and phone number, and immediately send her an SMS informing her of the infraction and the punishment: she must get out of her car, which is confiscated from that moment on. This is not a science fiction scenario; it is happening now. Although he underlined the dangers linking AI and totalitarianism, Yuval Noah Harari made an important clarification: none of this is deterministic, and societies can make decisions to reverse it.

THE RESPONSIBILITY OF BIG CORPORATIONS

Considering that the AI revolution is in its early stages, Harari emphasized the need to ask questions. “It has enormous positive potential. If I speak little about it, it’s because there are large, extremely rich, and powerful corporations that are already flooding us with positive messages and ignoring the dangers,” he said. For this reason, he sees it as crucial to generate “a debate about the responsibility of media giants” like Facebook, Twitter, Instagram, or TikTok. “Corporations must be held responsible for what their algorithms decide, just as the editor of the New York Times is responsible for its front page,” he pointed out. Another important issue being debated right now in countries like Spain is how to distinguish censorship from the mechanisms platforms use to prevent the spread of lies and hoaxes. The philosopher commented: “It’s important that people know how to tell the difference between human editors and corporate algorithms. People have the right to stupidity, to tell lies, except in extreme cases defined by the law; it’s part of freedom of speech.” He continued: “The problem is not the humans but the algorithms of corporations whose business model is based on engagement, which means that their priority is for people to spend as much time as possible on their platforms so they can sell ads and gather data to sell to third parties. We must be very careful about censoring human users,” he concluded.

THE END OF THE WORLD WE HAVE KNOWN

Although the book takes us through history from antiquity to the present day, much of the presentation focused on AI and its impact on our societies. Yuval Noah Harari explained what makes artificial intelligence different from any other technology we have known: “AI is different because it is not a tool; it is an agent, an independent agent. Atomic weapons have great power, but that power is in the hands of human beings. The bomb itself cannot devise any military strategy. AI is different; it can make decisions on its own.” Harari gave the example of what is happening in the media and on social networks with the rise of AI: “In a newspaper, the most important decisions are made by the editor; he is the one who decides what goes on the front page. Now, on some of the world’s largest platforms, like Facebook and Twitter, the role of the editor has been replaced by AI. It is the algorithms that decide which story is recommended and which one is placed at the top of the news feed. And AI can also come up with new ideas on its own. It’s out of our control. That is what makes it different from any previous revolution.”

“Current AI is an amoeba, but it will develop millions of times faster than we did”

One of the most common fears among the journalists at the press conference was AI’s ability to construct narratives the way humans do. Harari commented: “The latest developments in AI show its ability to create stories. There used to be AI that monitored and highlighted what caught attention, but it didn’t write, make music, or generate images. Now it can. I know people say those texts aren’t very good, the scores are of poor quality, the videos and images have errors, like hands with six fingers, but we must understand that we are at the early stages of a technology that is only ten years old. We haven’t seen anything yet. If we draw a parallel with biological evolution, current AI is an amoeba, but it will develop millions of times faster than we did. AI will go from amoeba to dinosaur in just ten or twenty years. The texts from ChatGPT have errors, but they are paragraphs, texts, essays that make sense. This is something that many humans struggle with. I’m a university professor, and many of my students find it hard to write a coherent essay linking various arguments. AI already does it. The cultural artifacts of the coming years will be the work of an alien intelligence. What reaction will this provoke in human society? No one knows, and that is the big question.”

A THREAD OF HOPE

The final part of Yuval Noah Harari’s appearance was marked by a question about whether there is any light in this situation. Can AI contribute something positive to humanity? His answer was clear: “Without a doubt. AI has enormous potential. And I don’t think all the people in Silicon Valley are evil. AI can provide us with the best healthcare in the coming years. There is a shortage of doctors around the world, and AI can offer a solution: monitoring us 24 hours a day, checking our blood pressure, having all our biological information… And all of this will be much cheaper, and it will be for everyone, even the poorest people and those living in remote areas.” He then gave another example of AI’s beneficial applications: “Every year there are over a million deaths from traffic accidents, most caused by human error, many by people who drink and drive. If you give AI control over traffic with autonomous vehicles, it can save a million lives: AI won’t fall asleep at the wheel or drink alcohol.” The Israeli thinker admitted at that point that he doesn’t talk much about the positive aspects of AI, and although there are sections of Nexus where that beneficial potential is laid out, he explained that there is a reason for focusing almost exclusively on the dangers: “There are very rich companies that flood the media and platforms with very positive messages about what AI will do and tend to ignore the dangers. The job of philosophers, academics, and thinkers is to focus on the dark side, although that doesn’t mean there are only dangers. We shouldn’t stop this evolution; what we’re saying is that we need to invest more in safety. It’s about applying common sense, just like in any other industry. The problem with people in the AI sector is that they are caught in an arms-race mentality: they don’t want anyone to beat them to the next advance. And this is very dangerous.”

“Now philosophy can begin to debate very practical issues”

And at this point, what can save us? Yuval Noah Harari is clear: “Philosophy.” “For thousands of years, philosophers have debated theoretical questions with little impact on society. Few people act according to a theoretical philosophy; we function more on emotions than on intellectual ideas. Now philosophy can begin to debate very practical issues. I’ll give an example: What should the algorithm do if an autonomous vehicle is about to run over two children, and the only way to avoid the accident is to sacrifice the car’s owner, who is asleep in the back seat? This is a practical question, and we need to tell AI what to do. And this question is not just for engineers and mathematicians; it’s also for philosophers. All of this connects with important human concepts like free will, the meaning of life, and the question of recognizing different forms of AI as lives with rights and an ethical standing in our societies.”

Whether we walked out of the movie theater or took the VHS tape out of the player and turned off the TV with the remote, many of us were left with the same sensation: we wanted to live what the replicant Roy Batty lived. Perhaps the long-awaited moment has come to see attack ships on fire off the shoulder of Orion and C-beams glittering in the dark near the Tannhäuser Gate. AI is already in our lives; the process is irreversible. And if, as Yuval Noah Harari warns, we do nothing to regulate it, to bring it under the umbrella of philosophy, we will be the ones lost in time; we humans will become those tears in the rain.


Interview and article by Miguel Ángel Santamarina, from El Bar de Zenda
