The topic of Artificial Intelligence, given its vast reach and how little we actually know about it (in many cases we are still at an intuitive stage), has produced a genuine avalanche of studies, opinions, controversies, and heated debates, with new entries appearing practically every day.
Our Laboratory believes that one of the best services it can provide to the people and organizations who follow our work is to offer a carefully selected series of opinions, positions, and debates, almost in real time, so that those who are interested can stay genuinely informed about what is happening and about our perspective.
Incidentally, the Laboratory is working on its Artificial Intelligence Microlab and will share its conclusions and perceptions in due course. The urgency of the issue, however, does not allow for long delays. That is why today we are launching a Series on Artificial Intelligence, which we hope will serve as a catalyst for analysis, reflection, and conclusions on the projections that such an important topic forces us to address. No one, whether governments, international organizations, regional bodies, think tanks, or individuals, can remain indifferent to its evolution.
As always, we hope our service will be useful to you.
Yuval Noah Harari argues that AI has hacked the operating system of human civilization.
Computers that tell stories will change the course of human history, says the historian and philosopher.
Keeping calm amid prophecies and speculations:
Within our series on Artificial Intelligence, today we have decided to analyze the latest statements and articles by Yuval Harari. As is well known, Dr. Harari is a historian, philosopher, and author of “Sapiens,” “Homo Deus,” and the children’s series “Unstoppable Us.” He is a professor in the Department of History at the Hebrew University of Jerusalem and co-founder of Sapienship, a social impact company. He is generally regarded as an important authority on interpreting the future from a sociological and philosophical point of view.
Harari has said in The Guardian that “AI has hacked the operating system of human civilization.” This is a very strong statement that undoubtedly grabs attention, even though it amounts to warning about a possibility that no one knows will materialize. With absolute frankness and respect for Dr. Harari, statements like this can read as reckless: they may be grounded in the prestige of an important academic career, but that prestige ends up lending weight to claims that sound somewhat apocalyptic. We do not know the probability of it happening; at times, as with Harari here, the claim is not even fully intelligible. But saying it in 2023 is striking, and it serves, much like viral TikToks that exaggerate, to hack our attention, which is what gets hacked the quickest.
It is logical to think that AI can bring serious problems, but there are no certainties. If the “godfather of AI” (Geoffrey Hinton, 2018 Turing Award winner, whom we will analyze in another article in this series) was wrong in a prediction six years ago, he can be wrong now. Harari also claimed, in a March 2023 article in The New York Times, that half of “AI researchers” believe there is a 10% chance that AI will kill us all. But that is not accurate. That article placed him at the center of the discussion and projected him into other media and networks, securing the coverage needed to maintain his position as a super-expert. That should, in any case, call every commentator to prudence, because the one thing that is true is that we don’t know.
What is certain is that we must all be very cautious on this issue. Just as we have said we must be careful with Altman, we must be cautious with other would-be prophets, focus on what is actually happening, and, where we are unsure (and we may be very unsure), regulate those developments about which we do have some certainty that they could prove dangerous.
Harari’s ideas on AI hacking civilization:
According to Harari, anxieties about artificial intelligence have pursued humanity since the beginning of the computer era. Until now, these fears focused on machines using physical means to kill, enslave, or replace people. In recent years, however, new artificial intelligence tools have emerged that seem to threaten the survival of human civilization from an unexpected direction. It is true that Artificial Intelligence appears to have acquired remarkable abilities to manipulate and generate language, whether through words, sounds, or images. From there, Harari argues that Artificial Intelligence has hacked the operating system of our civilization. Maybe, or maybe not; let us proceed with caution. It is true that behaviors have appeared that we do not understand, and even certain biases that we have yet to explain, but from that to claiming that we are in the hands of HAL, the wonderful machine from 2001: A Space Odyssey, there is quite a gap.
The point is not to rebut Dr. Harari or Dr. Hinton. The goal is to bring moderation into a debate that is overheating, one full of somewhat reckless statements and of things we still do not know, where we should proceed with caution. But that is one thing; predicting that Artificial Intelligence is about to take control of human civilization is quite another. In the end, some of us still have a fair amount of trust in human intelligence, at least for now, even despite those who seem to be walking a complicated path with regard to technological development.
Harari’s argument is very interesting: he tells us that language is the material from which almost all human culture is made. Human rights, for example, are not inscribed in our DNA. Rather, they are cultural artifacts that we create by telling stories and writing laws. Gods are not physical realities. Rather, they are cultural artifacts that we create by inventing myths and writing scriptures.
Money is also a cultural artifact. Banknotes are just pieces of colored paper, and today more than 90% of money isn’t even banknotes; it’s just digital information in computers. What gives money its value are the stories that bankers, finance ministers, and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes, and Bernie Madoff were not particularly good at creating real value, but they were all extremely capable storytellers. To these comments we can certainly add that the value of money, a commodity, rests on a pure act of faith.
What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think of ChatGPT and other new Artificial Intelligence tools, they often reach for examples like schoolchildren using it to write their essays (incidentally, some people have a notorious tendency to follow the law of least effort, and, as the saying goes, “made the law, made the loophole”: every action has a reaction). What will happen to the school system when children do this? But this type of question misses the bigger picture. Forget the school essays. Think about the next U.S. presidential race in 2024 and try to imagine the impact of AI tools that can be used to mass-produce political content, fake news, and scriptures for new cults. All of these possibilities are elements for reflection. Influence is certainly possible (we have already seen it in the data manipulations of Facebook, for example), especially when human intelligence is tempted to lean on somewhat unreliable substitutes.
Cults created by Artificial Intelligence?
In recent years, the QAnon cult has coalesced around anonymous online messages known as “Q drops.” Followers collected, venerated, and interpreted these Q drops as a sacred text. While, as far as we know, all previous Q drops were composed by humans, with bots merely helping to spread them, in the future we could see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their sacred books. Soon, that could become a reality. It is certainly a possibility. And a disturbing one.
On a more prosaic level, we may soon find ourselves engaging in long online discussions about abortion, climate change, or the Russian invasion of Ukraine with entities we believe to be human, but that are actually artificial intelligence. The problem is that it makes no sense for us to waste time trying to change the declared opinions of an AI bot, when it could perfect its messages so precisely that it has a good chance of influencing us.
Through its mastery of language, AI could even form intimate relationships with people (we have evidence that this has already happened, in isolated cases) and use the power of intimacy to change our opinions and worldviews. Although there are no signs, for now (and this is something we need to study seriously), that AI has consciousness or feelings of its own, it does not need them to foster false intimacy with humans; it is enough that it can make people feel emotionally attached to it. In June 2022, Blake Lemoine, a Google engineer, publicly claimed that the chatbot he was working on had become sentient. The controversial claim cost him his job. The most interesting thing about the episode was not Mr. Lemoine’s claim, which was probably unfounded, or at least shaped by an emotional perception that lacked the rational weight to justify it. The interesting thing is this: if AI can influence people to risk their jobs for it, what else could it induce them to do?
A battle to fight:
Harari says that we are facing a political battle for minds and hearts, that intimacy is the most efficient weapon, and that AI has just gained the ability to produce intimate relationships en masse with millions of people. We all know that over the last decade social media has become a battleground for controlling human attention, and, incidentally, the main place where people, in this particular evolution of human relationships, now seek affection. Here I strongly agree with Harari: with the new generation of artificial intelligence, the battleground is shifting from attention to intimacy. What will happen to human society and psychology when AI competes with emotional intelligence in a battle to fake intimate relationships with us, relationships that can later be used to convince us to vote for certain politicians or buy certain products? And what of something potentially far more complex, the delicacy of relationships between humans themselves? This needs to be researched, and with the utmost urgency.
Even without creating “false intimacy,” the new tools of artificial intelligence will have an immense influence on our opinions and worldviews. People might end up using a single AI advisor as an omniscient, one-stop oracle (too many people are waiting for exactly this; it is not a gratuitous statement, as we can observe it every day on certain social networks, which there is no need to single out at this moment, though that moment will come). It is no wonder that some of Google’s top executives are terrified (even as they have no intention of missing out on the massive business behind all of this). Why bother searching when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can simply ask the oracle for the latest news? And what is the purpose of ads when I can just ask the oracle what to buy? Amazon and others figured this out long ago; it is just a matter of enhancing it.
Harari, the end of human history, and the need to move cautiously with certain claims:
Even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture: between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex. Here we are treading on dangerous ground, and this is the point where I believe Harari was headed from the beginning: the shift from human control to AI control, and, why not ask, what human control might lie behind the AI. This question alone could give rise to a treatise.
Harari continues, assuming that this inversion of control by intelligences indeed takes place: What will happen to the course of history when artificial intelligence takes over culture and begins to produce stories, melodies, laws, and religions? The previous tools, like the printing press and the radio, helped spread human cultural ideas, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, an entirely new culture.
At first, AI will likely imitate the human prototypes it was trained on during its infancy. But with each passing year, the culture of this type of intelligence will boldly go where no human has gone before. For millennia, humans have lived within the dreams of other humans. In the coming decades, we could find ourselves living within the dreams of an alien intelligence. The reference to “alien” here does not carry its usual connotation: we are not talking about extraterrestrial beings, but about non-human intelligences, even though, paradoxically, they were created by humans.
What we’ve seen so far is of great interest. What follows, closely tied to dreams and storytelling, is Harari’s own take, and I don’t fully agree with it. It concerns the subordination of the human mind to myths and dreams, a matter worth discussing at length. Harari states that the fear of artificial intelligence has only haunted humanity for the last few decades (which would assume that a broad segment of humanity knows and fears AI, something that frankly seems artificial to me). But for thousands of years, humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and create illusions. Consequently, humans have feared being trapped in a world of illusions since ancient times (a fear of dreams? When dreams have been a driving force behind human advancement? Frankly, this seems excessive, with all due respect to the constructs and thinking of Master Harari).
In the 17th century, René Descartes feared that a malicious demon might be trapping him inside a world of illusions, creating everything he saw and heard. In ancient Greece, Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave for their entire lives, facing a blank wall: a screen, onto which various shadows are projected. The prisoners confuse the illusions they see there with reality. Undoubtedly so, but these are the preoccupations of Masters; the ordinary man and woman, who fight daily for their bread (a struggle that hasn’t changed in centuries), remain somewhat distant from these philosophical deliberations.
Harari continues, building on his theory: In ancient India, Buddhist and Hindu sages pointed out that all humans lived trapped within Maya, the world of illusions. What we normally take as reality is often just fiction in our own minds. People can undertake entire wars, killing others and wishing to be killed themselves, due to their belief in this or that illusion. Another element we can discuss at length.
Some good news seems to arrive:
Of course, the new power of artificial intelligence could also be used for good purposes. We will not dwell on this, because the people developing it talk about it at length; the work of historians and philosophers, Harari says, is to point out the dangers. AI can certainly help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis. The question we face is how to ensure these new tools are used for good and not for evil. To do that, we must first appreciate their true capabilities.
Since 1945, we’ve known that nuclear technology could generate cheap energy for the benefit of humans, but it could also physically destroy human civilization. Therefore, we reshaped the entire international order to protect humanity and ensure that nuclear technology was used primarily for good. Now we have to deal with a new weapon of mass destruction that could annihilate our mental and social world.
We can still regulate the new AI tools, but we must act quickly. While nuclear weapons cannot invent more powerful nuclear weapons, artificial intelligence can create exponentially more powerful intelligence. The first crucial step is to demand rigorous safety controls before these powerful tools are released into the public domain. Just as a pharmaceutical company cannot release new drugs before testing their short- and long-term side effects, tech companies should not release new AI tools before they are safe. We need an equivalent of the Food and Drug Administration for new technologies, and we need it yesterday. That, by the way, is an idea we fully share, and one worth fighting for.
Harari’s next question is more than interesting: would slowing the public deployment of AI make democracies fall behind more ruthless authoritarian regimes? Quite the opposite. Unregulated deployments of the technology would create social chaos, which would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations are based on language. When technology hacks language, it could destroy our ability to have meaningful conversations, and thus destroy democracy. In this sense (and we do not attribute this to Harari), the clearest case is that of the People’s Republic of China.
Finally, Harari ends with an apocalyptic but very plausible statement. We have just encountered an alien intelligence, here on Earth. We don’t know much about it, except that it could destroy our civilization. We should put an end to the irresponsible deployment of AI tools in the public sphere and regulate them before they regulate us. And the first regulation I would suggest is to make it mandatory for AI to reveal that it is AI. “If I’m having a conversation with someone and I can’t tell if they’re a human or an artificial intelligence, that’s the end of democracy.”
The QAnon conspiracy theory has been a far-reaching and controversial phenomenon, particularly in the United States, although it has also gained followers in other countries such as Japan, the United Kingdom, and Germany. The theory is based on the idea that there is a “deep state” of powerful actors who manipulate institutions and are involved in criminal activities, such as the sexual trafficking of minors, and that Donald Trump was fighting to expose and stop them. QAnon followers believed that Trump was investigating figures like Hillary Clinton, Barack Obama, and George Soros, whom they considered the leaders of this conspiracy.
One of the distinctive features of QAnon is its decentralized organization: it started with an anonymous post from someone using the pseudonym “Q,” who claimed to have access to high-level classified information about the Trump administration. This person or group of people posted messages on internet forums, which fostered the creation of an online community that interpreted and spread these posts, known as “Q drops.”
The movement has been largely driven by social media and online platforms, which have served to promote the QAnon messages and organize its followers. This influence was reflected in the appearance of QAnon supporters at political rallies and the active promotion of the theory on social networks. Despite the movement’s great visibility, it is difficult to determine the exact number of its followers, as QAnon activity mainly takes place on digital platforms, including the more obscure ones like 8kun (formerly known as 8chan).
Furthermore, the theory has been linked to violence and acts of domestic terrorism, which led the FBI to identify it as a potential domestic terrorism threat. It has also been documented that political figures, including members of the Trump administration, amplified QAnon messages, sometimes without realizing the harm this could cause.
Major tech platforms like Twitter and Facebook have taken steps to limit the spread of the theory, blocking accounts and restricting content. However, QAnon followers have migrated to other, less monitored platforms to continue spreading their ideas.
The slogan “Where we go one, we go all” (#WWG1WGA) has been adopted by followers as a symbol of unity and purpose. This slogan, originally from the movie White Squall, reflects the strong sense of camaraderie among community members, many of whom see themselves as digital soldiers fighting against what they perceive as a greater evil.
The global reach and digital nature of QAnon have made it a phenomenon that is difficult to control, with significant implications for politics, security, and social cohesion. However, its conspiratorial nature and lack of verifiable evidence have led many to consider it a dangerous breeding ground for misinformation and extremism.
It is a topic that invites reflection on how ideas can shape reality and how tech platforms influence the spread of extreme beliefs.