The predictions of the godfather of artificial intelligence, Geoffrey Hinton: “Machines will have feelings; they will fall in love.”

Artificial intelligence

November 06, 2024

Geoffrey Hinton has changed our lives. He made it possible for machines to learn on their own, and thus we have word predictors, image recognition, virtual assistants… and ChatGPT. When he received the BBVA Foundation Frontiers of Knowledge Award in 2017, we spoke with him in Toronto, and what he shared, which we revisit here, is especially revealing now that he himself admits his concern about the potential of artificial intelligence and the urgent need to control it. Back then, he didn’t see it as threatening…

Note: Hinton received the Nobel Prize in Physics in October 2024.

Geoffrey Hinton has not sat down since 2005. Literally. A serious back problem forces him to stand or lie down. He just smiles and shrugs at the challenge (“I’m used to it; you can get used to anything”), but it means this interview takes place standing up in his run-down office at the University of Toronto, in Canada. Hinton is now a prominent figure at Google, which hired him in 2013 to develop artificial intelligence, and he could have a better office, but this is where he has worked since the eighties, and he has no interest in changing it.

Born in London in 1947, Hinton is called the ‘godfather’ of artificial intelligence. And it’s no casual title. The paternity of AI is highly disputed, but there’s no doubt about the godfather. His contribution is decisive. Against all logic, he chose to downplay logic in the creation of artificial intelligence. And he did so in 1972.

From a young age, Hinton was genuinely fascinated by the brain, so his first field of study was experimental psychology at Cambridge. Once immersed in the workings of the mind, he decided its principles could be replicated in computing. Drawing inspiration from biology, he created what have come to be known as ‘neural networks.’ For 30 years his proposal found little resonance in the scientific community, but as computing power grew, neural-network-based artificial intelligence began to outperform approaches based solely on accumulating data. In Hinton’s approach, the system improves on its own as it learns: the learning algorithm weighs which responses are most effective and adjusts the strength of the connections between the ‘neurons,’ or nodes, accordingly.

Suddenly, Hinton was a genius, a visionary. And Silicon Valley turned its eyes to him. Virtual assistants, simultaneous translators, image recognition, word predictors, driverless cars… Behind all of these is one brain: Geoffrey Hinton’s.

XLSemanal: You proposed that machines work like the human brain, and even though we still don’t fully understand how our mind works, it turns out it works… Yours is quite the achievement.

Geoffrey Hinton: We don’t know how the brain works in depth, but we do know that when it learns something, it changes the strength of connections between neurons. And we know more or less how a neuron works. So, we’ve created a computer model applying the principles of a neuron and designed a learning algorithm so that the system improves as it learns.
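Hinton’s description of learning as changing the strength of connections can be illustrated with a toy example. The sketch below is our own, not from the interview: a single artificial ‘neuron’ (a perceptron, one of the simplest learning models) that learns the logical AND function by nudging its connection weights after every mistake.

```python
# A toy artificial neuron: weighted inputs, a threshold, and a learning
# rule that adjusts connection strengths ("weights") after each mistake.
# Illustrative only; the AND-gate task and all names are our own choices.

def step(x):
    """Fire (output 1) if the weighted input reaches the threshold."""
    return 1 if x >= 0 else 0

# Training data: inputs and targets for the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # connection strengths, one per input
b = 0.0          # bias term (moves the firing threshold)
lr = 0.1         # learning rate: how much to adjust per mistake

for _ in range(20):                       # sweep over the data repeatedly
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out                # 0 when the neuron was right
        # Strengthen or weaken each connection in proportion to the error
        # and to how active that input was.
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])
# prints [0, 0, 0, 1]
```

Modern networks replace the single neuron with millions of them and use a more powerful update rule (backpropagation), but the core idea Hinton describes is the same: learning means adjusting connection strengths.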

The Unleashed AI

Hinton comes from a British family of scientists. His great-great-grandfather was the mathematician George Boole, whose Boolean logic laid the foundations of modern computing, and his father was the renowned entomologist H. E. Hinton. He spent 40 years working on neural networks when almost no one else believed in them. Now, with ChatGPT representing a substantial leap in that direction, Hinton has expressed his fear that a reckless step has been taken and that AI may escape human control much sooner than we think. At 75, he has left Google and moved to London, from where he wants to do everything possible to prevent the dark side of his creation from taking over.

XL: This is going to require a more detailed explanation, but it seems that artificial intelligence will not only create smarter machines, but it might also make us smarter…

G.H.: Maybe we won’t be smarter, but we will understand our brain better.

XL: But if we don’t improve our brain capabilities and machines do, they’ll end up dominating us.

G.H.: No, I don’t think so. There will be a symbiosis. Computers with neural network simulators and people will work together. I don’t think we’ll end up dominated by machines, and if that happens, it will be in a very, very distant future.

XL: Well, at the Center for Existential Risk at the University of Cambridge, created by people working on artificial intelligence, they’re not so sure. They’re studying the possibility that it could happen in 50 years.

G.H.: 50 years? It’s impossible to know what will happen in 50 years! I know the center, but you can’t make serious predictions beyond 5 years.

“Robots won’t dominate us. A less intelligent system can control a superior one. Look at babies. The mother can’t stand their crying. Something less powerful than her dominates her.”

XL: Let’s assume that possibility exists… What could we do to prevent it? Introduce a system of values into machines or create an ethical algorithm, as some suggest?

G.H.: We already have an example of a less intelligent system controlling a more intelligent one. And that’s a baby. A mother simply cannot stand her baby’s crying. She’s designed not to be able to remain indifferent. It’s an example of how something you would think is more powerful — the mother — has something inside her, built by evolution, that allows something less powerful — the baby — to control her and prevents her from abandoning it or throwing it out the window. Babies have found a way to control mothers.

XL: What you mean, I understand, is that we are the babies, and the superintelligent machines are the mothers, and despite that, we will be able to control them, is that right?

G.H.: That’s right. We will build this ‘thing’ into the mothers, that is, the machines, that they cannot resist, and it will make them turn off.

XL: Are we going to start crying?

G.H.: I don’t know what we’ll do. But we already have an example that it’s possible. If evolution did it with mothers, we’ll be able to do it with machines.

“The solution for those who lose their jobs to robots is universal basic income. Progress cannot be stopped.”

XL: You’ve said that the responsibility for what machines do is not the scientists’ but the politicians’.

G.H.: There are two different issues here. One is that machines become more intelligent than us and surpass us. If that happens, it will be in the very distant future. Another issue is that machines are so intelligent they can perform many of today’s jobs, leaving people unemployed. This issue falls to the political system. If machines can do things like dispense money in a bank, it’s intrinsically more efficient, so it should be intrinsically good for people. And what we want is for it to be better for all people, not just a few. That’s a political matter, and politicians need to solve it.

XL: Any ideas?

G.H.: Universal basic income. I’m in favor of that, for example. In fact, I think it’s the only solution. Because what’s certain is that you can’t stop progress. There’s no way to avoid ATMs. And nobody thinks they were a bad idea. The solution is to change the political system so that when more wealth is created because machines are more efficient, that wealth is shared.

XL: One of the fields where your neural networks are most effective is translation. It seems that soon we won’t need to learn languages, right?

G.H.: We’re still a bit far from perfect translations, but they’re already quite good. It will be like calculators. Nobody bothers with mental math anymore. For everyday work and business transactions, you won’t need to learn the language. However, for quality translations, you’ll still need people. And if you want to understand a culture, you’ll have to learn its language.

XL: Another use where your algorithms are very useful is in stock market predictions, meaning speculation. How will this change the financial world?

G.H.: Well, that’s already happening. There’s a lot of automatic trading, and some people are making a fortune with it.

XL: Could this unleashed speculation, based on the speed of data processing, lead us to another crisis?

G.H.: I don’t know, I’m not an expert in that area. But what I believe is that all of that should be highly regulated. The danger isn’t the machines, whether they go fast or not, the danger is that regulations are being removed.

Mechanical Intuition

In 1997, the computer Deep Blue defeated chess champion Garry Kasparov. The next challenge was Go, a game of Chinese origin that is hard for a computer because it demands intuition. In 2016, Google DeepMind’s AlphaGo, built on the deep neural networks Hinton pioneered, defeated the human again. It did so with moves commentators found ‘incomprehensible,’ meaning it was being creative.

XL: The great advantage of neural networks is that they are intuitive, in addition to logical. But what role do feelings play in all of this? Can a machine fall in love?

G.H.: Yes.

XL: Yes?

G.H.: Humans are machines, just very, very sophisticated ones.

XL: That statement would require a deeper explanation, but I was referring to those that are not made of flesh and bone. Can a robot fall in love?

G.H.: Of course it could. Imagine your brain. And imagine we replace each brain cell with a machine that works exactly the same as that cell. Imagine we could do that with nanotechnology. Then I replace all your brain’s neurons with tiny machines that act exactly like your neurons. Whatever you did before, this new system will do it now. If you laughed at a joke, this new system will laugh too; if you’re offended by someone’s behavior, this new system will be offended too, it will have feelings…

XL: But reproducing that isn’t possible.

G.H.: It’s not possible… today. But it is possible. The thing is, people don’t understand what it means to have feelings. It’s a philosophical problem.

XL: Well, the way you describe it doesn’t seem philosophical, but rather mechanical. Replacing neurons with chips…

G.H.: It’s philosophical. If you ask a person to explain their feelings, they’ll say something like, “I feel like I want to hit someone.” They translate them into actions they could take in the real world, or they talk about the causes. So when people talk about feelings, they’re not talking about something inside the head; they’re not referring to neuronal activity, because we’re not used to that. If I say, “Your neuron 52 is very active,” that won’t mean anything to you, but if you feel like hitting someone, it’s because neuron 52 is very active. So ‘feelings’ is just a crude language for talking about states of the brain.

XL: So, we don’t understand feelings…

G.H.: We always talk in terms of their causes or effects. Not what happens in the brain.

XL: And how do we transfer those feelings to machines?

G.H.: We have a machine to which we can give inputs, and it’s capable of suppressing its actions, it can inhibit itself from acting. Normally, that machine behaves in a certain way when we give it those inputs, but now we tell the robot: “I want you not to do it, but I want you to tell me what you would do if you could do something based on those inputs.” And the machine would say: “If I could, I would move that piece.” That is, the robot feels like it wants to move a piece. The robot has a feeling. Even though it doesn’t do it. And that’s how a feeling works.

XL: And now we have a philosophical question… If you are only what your neurons are, and they come ‘pre-programmed’ when you’re born, are you responsible for your actions?

G.H.: Of course. There is no conflict between determinism and responsibility. Although that would take us to another topic. But having certain neurons in no way eliminates our responsibility for who we are and what we do.

XL: I suspect you’re not a religious person…

G.H.: That’s an accurate suspicion.

XL: You dedicated 40 years to approaching artificial intelligence as if it were the human brain without receiving any recognition. What do you say now to those who criticized you for wasting your time?

G.H.: What I tell them is that there should be better theories than the ones we use today. Many people have stopped searching for better kinds of neural networks because the current ones are working, and that’s a big mistake. The moment you’re satisfied, you’re not going anywhere.

XL: Is that why someone like you has started working for Google?

G.H.: What I do at Google is try to develop new types of neural networks, more efficient ones. They have faster computers and allow me to spend all my time researching.

XL: When you were young, you refused to work for the U.S. Army, which was very interested in your research and willing to fund it. Does working for Google not make you uneasy?

G.H.: I wouldn’t work for Google if they developed weapons.

XL: But they do business, and not always transparently…

G.H.: They make the economy more efficient.

“I don’t think we’ll end up being ruled by Silicon Valley. We will always need political leaders. If you want to change the world, study social sciences.”

XL: But we don’t know what they do with our data, if they sell it for others to sell us products… Isn’t all of that unclear, don’t you think?

G.H.: Google is very careful with what it does with personal data. And, of course, I trust Google much more than the NSA. Google was horrified when it realized the NSA was intercepting its servers. Genuinely horrified. I was there.

XL: Maybe we’ll end up having to trust the guys in Silicon Valley more than the politicians, especially in times of Trump…

G.H.: I’m not going to make any comments on Trump.

XL: Fine. But do you think we’ll end up being governed by Silicon Valley and the algorithm elite?

G.H.: No, I don’t think so. We will always need political leaders. The thing is, people like to think that when things go wrong, it’s because of political leaders, instead of thinking in terms of systems. It’s the social system and its dynamics that we should understand and organize to make it work well.

XL: What career should we study today to get a job?

G.H.: If you study neural networks, you’ll definitely find a job right now. But if you want to change the world, study social sciences.


Highlighted Opinions – Geoffrey Hinton

Article by Ana Tagarro – ABC Madrid. Published on May 8, 2023, updated by the author on September 27, 2024.

Ana Tagarro is a prominent figure in research and education, known especially for her work on topics related to science and technology. She has contributed to science communication and has been involved in projects that bring science closer to society.

Author: Future Labs Research Group Selection
