Yuval Noah Harari, historian and technological thinker: "AI is an unprecedented threat to humanity"

Since Yuval Noah Harari published Sapiens: A Brief History of Humankind, he has become one of those brilliant minds worth following closely, largely because of his ability to link the past with the present without resorting to teleological arguments. Although he is a historian by training, his work outside history, in current-affairs commentary, is also worth keeping an eye on.

In this realm, Harari frequently publishes opinion pieces and gives interesting interviews. On this occasion, it is worth revisiting one of his articles in The Guardian in which he warns about the existential dangers of AI, a subject he also addresses in one of his latest books, Nexus, even though some experts argue it is not a danger at all.
For Harari, AI is an outright existential risk to humankind, one that could call into question our ability to keep developing as a species.

An ‘existential’ risk to consider:
Artificial intelligence poses an unprecedented threat to humanity because it is the first technology in history capable of making decisions and generating new ideas on its own. All previous human inventions served to enhance human beings: however powerful the new tool was, decisions about its use always remained in our hands. Nuclear bombs do not decide on their own whom to kill, nor can they improve themselves or invent even more powerful bombs – Yuval Noah Harari
Harari's stance is not surprising: we have read countless times that AI can become a real problem for humans rather than just a tool. He is concerned about the possibility of AI making decisions on its own and creating something genuinely new, something that has never been possible with any of our previous inventions, because how to use them has always been a human decision. The problem is that, here, it is the AI that decides.

Sam Altman, co-founder of OpenAI, on a possible end of the world: "No bunker will save you if AI goes out of control"

For a while now, visions of the destructive potential of Artificial Intelligence have been circulating. Sam Altman has been clear about this: AI can be problematic, but also incredibly constructive.

Sam Altman is undoubtedly one of the key figures of this decade. Since ChatGPT burst onto the global market, things have only accelerated year after year with the unrelenting success of Artificial Intelligence (AI) models. However, that success has also sparked a new fear, specifically of an apocalypse caused by AI, which is why many wealthy individuals are buying their own bunkers.

Now, with the Cold War-like rivalry between the United States and China, the fear is even greater, especially after the arrival of a Chinese AI like DeepSeek on the global market. With Sam Altman in the spotlight over his confrontations with Elon Musk, who wants to buy OpenAI, it is worth revisiting an interview he gave in 2023, in the early days of the current AI boom, in which he explained the dangers of artificial general intelligence (AGI) and why no bunker could save humanity if it slipped out of human control.

A malignant AGI and the end of humanity:
Humans have become the dominant species largely because of our transformative potential and intelligence. This is not strange in itself, but we have never had to deal with a species more intelligent than ourselves. The goal of artificial general intelligence is precisely for it to be more intelligent than us, which generates some fears.

In this way, Altman made clear back in 2023 his vision of AI's destructive potential, although he wants to avoid such a scenario and keep developing AI in a way that is efficient and safe for everyone. He recently admitted that he had been on the wrong side of history.
