
Interview with AI scientist Ray Kurzweil: “We are going to multiply intelligence by a million by 2045”
The Google futurist talks about nanobots and avatars, deepfakes and elections, and why he is so optimistic about a future in which we will merge with computers.
The American computer scientist and techno-optimist Ray Kurzweil is a long-established authority on artificial intelligence (AI). His successful 2005 book, The Singularity Is Near, sparked the imagination with sci-fi predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, a point he called “the singularity.” Now, nearly 20 years later, Kurzweil, 76, has a sequel, The Singularity Is Nearer, and some of his predictions no longer seem so outlandish. His day job is principal researcher and AI visionary at Google; he spoke with The Observer in his personal capacity as an author, inventor, and futurist.
Why write this book?
The Singularity Is Near talked about the future, but 20 years ago, when people didn’t know what AI was, what was coming was clear to me, though not to everyone. Now AI dominates the conversation. It’s time to take another look, both at the progress we have made (large language models are quite pleasant to use) and at the advances still to come.
Your projections for 2029 and 2045 haven’t changed…
I have been consistent. So, 2029, both for human-level intelligence and for artificial general intelligence (AGI), which is a bit different. Human-level intelligence generally means an AI that has reached the ability of the most skilled humans in a particular domain, and by 2029 that will have been achieved in most respects. (There may be a few years of transition beyond 2029 in which AI has not yet surpassed the top humans in a few key skills, such as writing Oscar-winning screenplays or generating new, deep philosophical insights, though it eventually will.) AGI means an AI that can do everything any human can do, only at a higher level. AGI sounds harder, but it is coming at the same time. And my five-year estimate is actually conservative: Elon Musk recently said it is going to happen in two years.
Why should we believe your dates?
I am the only person who predicted the tremendous interest in AI we are seeing today. In 1999 people thought it would take a century or more. I said 30 years, and look what we have. The most important driver is the exponential growth in the amount of computing power you get per constant dollar. We are doubling price-performance every 15 months. LLMs only started working barely two years ago because of that increase in computation.
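As a rough illustration of what that doubling rate compounds to, here is a minimal Python sketch (my own back-of-the-envelope arithmetic, not a model from the interview):

```python
# Back-of-the-envelope sketch: compounding the 15-month doubling of
# price-performance that Kurzweil cites. The constant and function
# below are illustrative assumptions, not an official model.

DOUBLING_PERIOD_MONTHS = 15  # figure quoted in the interview

def price_performance_multiplier(years: float) -> float:
    """Factor by which computing per constant dollar grows over `years`."""
    months = years * 12
    return 2 ** (months / DOUBLING_PERIOD_MONTHS)

for years in (5, 10, 20):
    print(f"{years:>2} years -> ~{price_performance_multiplier(years):,.0f}x")
# prints: ~16x after 5 years, ~256x after 10, ~65,536x after 20
```

At that rate, compute per dollar alone improves roughly 65,000-fold over two decades, which is the kind of curve Kurzweil’s timelines lean on.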
What is currently missing for AI to reach where you predict it will be in 2029?
One is more computing power, and that is coming. It will allow improvements in contextual memory, common-sense reasoning, and social interaction, all areas where deficiencies remain. Then we need better algorithms and more data to answer more questions. LLM hallucinations [where they generate nonsensical or inaccurate output] will become a much smaller problem, certainly by 2029; they already occur much less than they did two years ago. The problem arises because the models don’t have the answer and don’t know that they don’t: they give the best response they can find, which may be incorrect or inappropriate. As AI gets smarter, it will understand its own knowledge more accurately and tell humans precisely when it doesn’t know something.
What exactly is the Singularity?
Today our brains have a fixed size that we cannot exceed in order to get smarter. But the cloud is getting smarter, and it is growing essentially without limits. The Singularity, a metaphor borrowed from physics, will occur when we merge our brains with the cloud. We will be a combination of our natural intelligence and our cybernetic intelligence, and it will all blend into one. Making this possible will require brain-computer interfaces, which will ultimately be nanobots (robots the size of molecules) that enter the brain non-invasively through the capillaries. We are going to multiply intelligence a millionfold by 2045, and that is going to deepen our consciousness and our knowledge.
It’s hard to imagine what this would be like, but it doesn’t sound very appealing…
Think of it as if you had your phone, but in your brain. If you ask a question, your brain will be able to search the cloud for an answer similar to what you do now on your phone, only it will be instantaneous, with no input or output issues, and you won’t even notice you did it (the answer will simply appear). People say, “I don’t want that”: they thought they didn’t want phones either!
What about the existential risk of advanced AI systems that could obtain unforeseen powers and seriously harm humanity? The “godfather” of AI, Geoffrey Hinton, left Google last year, partly due to those concerns, while other high-profile technology leaders like Elon Musk have also issued warnings. Earlier this month, workers from OpenAI and Google DeepMind called for greater protections for whistleblowers raising safety concerns.
I have a chapter on the dangers. I have been involved in trying to find the best way to move forward and helped develop the Asilomar AI Principles [a set of legally non-binding guidelines from 2017 for the responsible development of AI]. We need to be aware of the potential here and monitor what AI is doing. But simply being against it is not sensible: the advantages are so profound.
Won’t there be physical limits to processing power that could hold technology back?
The computing we have today is basically fine for this: it will keep improving every year and stay on that trajectory. There are many ways to keep improving chips. We have only just begun to use the third dimension [building 3D chips], which will carry us forward for many years. I don’t think we will need quantum computing: its value has never been demonstrated.
You maintain that the Turing test, in which an AI can communicate by text indistinguishably from a human, will be passed in 2029. But to pass it, the AI will have to make itself dumber. Why?
Humans aren’t that precise and don’t know everything! Today you can ask an LLM very specific questions about any theory in any field and it will answer very intelligently. But who could do that? If a human answered like that, you’d know it was a machine. So that is the point of making it dumber: the test is about passing for a human. Some people report that GPT-4 can pass a Turing test. I think we have a few more years until we settle this question.
Not everyone will be able to afford the future technology you envision. Are you worried about technological inequality?
Being rich lets you afford these technologies at an early stage, but also at a point where they don’t work very well. When mobile phones were new, they were very expensive and also worked terribly. They provided very little information and didn’t connect to the cloud. Now they are very affordable and extremely useful. Roughly three-quarters of the world’s population has one. The same will happen here: this problem will disappear over time.
The book analyzes in detail the potential of AI to eliminate jobs. Should we be worried?
Yes and no. Certain types of jobs will be automated and people will be affected, but new capabilities also create new jobs. A job like “social media influencer” didn’t even make sense ten years ago. Today we have more jobs than ever, and average personal income per hour worked in the United States is ten times higher than it was a hundred years ago, adjusted to today’s dollars. Universal basic income will start to be rolled out in the 2030s, which will help cushion the impact of labor disruptions. It won’t be enough at first, but over time it will be.
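For context, the tenfold rise over a century implies a fairly modest annual rate; a quick sketch of the arithmetic (my illustration, not a figure from the interview):

```python
# Implied average annual real growth rate behind "10x in 100 years".
# Illustrative arithmetic only; the claim itself is Kurzweil's.

growth_factor = 10.0
years = 100
annual_rate = growth_factor ** (1 / years) - 1
print(f"~{annual_rate:.2%} per year")  # ~2.33% per year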
There are other alarming ways, beyond job losses, in which AI promises to transform the world: spreading disinformation, causing harm through biased algorithms, and supercharging surveillance. You don’t say much about those…
We have to solve certain types of problems. We have elections coming up and deepfake videos are a concern. I think we can figure out [what is false], but if it happens right before the elections we won’t have time. As for bias issues, AI is learning from humans and humans are biased. We are making progress, but we’re not where we want to be. There are also issues regarding the fair use of data by AI that must be resolved through the legal process.
What do you do at Google? Did the book go through any pre-publication review?
I advise them on different ways they can improve their products and advance their technology, including LLMs. The book is written in a personal capacity. Google is happy that I publish these things and there was no review.
Many people will be skeptical of your predictions about physical and digital immortality. You predict that in the 2030s medical nanobots will arrive that can enter our bodies and perform repairs, so that we can stay alive indefinitely, and that “beyond-death” technology coming in the 2040s will allow us to upload our minds so they can be restored, even placed in convincing androids, if we suffer a biological death.
Everything is progressing exponentially: not only computing power, but also our understanding of biology and our ability to design at much smaller scales. In the early 2030s, we can expect to reach the longevity escape velocity, where every year of life lost to aging will be regained thanks to scientific progress. And as we move forward, we will actually regain more years. It is not a foolproof guarantee of living forever—there are still accidents—but the probability of dying will not increase from one year to the next. The ability to digitally resurrect deceased humans will raise some interesting social and legal questions.
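To make the escape-velocity idea concrete, here is a toy model (an illustrative sketch with assumed numbers, not Kurzweil’s own math): treat remaining life expectancy as a balance that aging draws down by one year per calendar year, while medical progress credits some number of years back. Escape velocity is the point where the yearly credit reaches one.

```python
# Toy model of "longevity escape velocity": remaining life expectancy
# falls by one year per calendar year of aging, while medical progress
# adds `gain_per_year` years back. Escape velocity is gain_per_year >= 1.
# All numbers below are illustrative assumptions, not interview figures.

def remaining_expectancy(initial: float, years: int, gain_per_year: float) -> float:
    remaining = initial
    for _ in range(years):
        remaining += gain_per_year - 1.0  # progress credit minus one year of aging
    return remaining

print(remaining_expectancy(30, 20, 0.5))  # below escape velocity: shrinks to 20.0
print(remaining_expectancy(30, 20, 1.0))  # at escape velocity: holds at 30.0
print(remaining_expectancy(30, 20, 1.3))  # above it: grows to ~36
```

Note what the model does and doesn’t say: above escape velocity the expectancy balance grows each year, but as Kurzweil notes, accidents still happen, so this is rising expectancy rather than guaranteed immortality.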
What is your own plan for immortality?
My first plan is to stay alive, thus reaching the longevity escape velocity. I take about 80 pills a day to help me stay healthy. Cryogenic freezing is the alternative. I also intend to create a replica of myself [an AI avatar for the afterlife], which is an option I believe we will all have by the late 2020s. I did something like that with my father, compiling everything he had written during his life, and it was a bit like talking to him. [My replica] will be able to draw on more material and thus represent my personality more faithfully.
What should we do now to better prepare for the future?
It’s not about us versus AI: AI is infiltrating us, and it will allow us to create new things that were not feasible before. It will be a fantastic future.
Contribution by Zoe Corbyn (a British journalist specialising in science, technology, higher education, and culture. Her articles regularly appear in leading outlets such as The Guardian, Nature, New Scientist, and others. Her work covers complex, topical scientific subjects, presented accessibly for a general audience).
The Observer
Translation: Future Lab technical team