For some time now, warnings about the destructive potential of Artificial Intelligence have been circulating. Sam Altman has been clear on this point: AI can be dangerous, but it can also be incredibly constructive.
Sam Altman is undoubtedly one of the key figures of this decade. Since ChatGPT burst onto the global market, things have only accelerated year after year with the relentless success of Artificial Intelligence (AI) models. But that success has also sparked a new fear, namely of an apocalypse caused by AI, which is why many wealthy individuals are buying their own bunkers.
Now, with the Cold War-like rivalry between the United States and China, that fear is even greater, especially after the arrival of a Chinese AI like DeepSeek on the global market. With Sam Altman back in the spotlight over his confrontations with Elon Musk, who wants to buy OpenAI, it is worth revisiting an interview he gave in 2023, at the dawn of the AI boom, in which he explained the dangers of artificial general intelligence (AGI) and why no bunker could save humanity if it slipped out of human control.
A malignant AGI and the end of humanity: humans became the dominant species largely thanks to our intelligence and our capacity to transform the world, but we have never had to deal with a species more intelligent than ourselves. The goal of artificial general intelligence is precisely to be more intelligent than us, which is what generates these fears.
This is how Altman laid out his view of AI's destructive potential back in 2023, although he says he wants to avoid that scenario and to keep developing AI in a way that is effective and safe for everyone. More recently, he admitted he had been on the wrong side of history.