
Yuval Noah Harari, historian and technological thinker: "AI is an unprecedented threat to humanity"
Since Yuval Noah Harari published Sapiens: A Brief History of Humankind, he has become one of those brilliant minds worth following closely, largely because of his ability to connect the past with the present without necessarily resorting to teleological arguments. Although he is a historian by training, his work outside history, in the realm of current opinion, is also worth keeping an eye on.
Technology:
In this realm, Harari frequently publishes opinion pieces and gives interesting interviews. Here it is worth revisiting one of his articles in The Guardian, in which he warns about the existential dangers of AI, a subject he also explores in one of his latest books, Nexus, even though some experts argue it is not a danger at all.
For Harari, AI is an outright existential risk to humankind, one that could call into question our very ability to continue developing.
An ‘existential’ risk to consider:
Artificial intelligence poses an unprecedented threat to humanity, as it is the first technology in history capable of making decisions and generating new ideas on its own. All previous human inventions have served to empower human beings, because no matter how powerful the new tool was, decisions about its use always remained in our hands. Nuclear bombs do not decide on their own whom to kill, nor can they improve themselves or invent even more powerful bombs on their own – Yuval Noah Harari
Harari's stance is not surprising: we have read countless times that AI could become a real problem for humans rather than just another tool. What concerns him is the possibility of AI making decisions on its own and creating something genuinely new, something no previous invention of our species has been able to do, since how to use those inventions was always a human decision. With AI, the worry is that the technology itself does the deciding.