The Predictions of Judea Pearl

Artificial intelligence

November 09, 2024

He has revolutionized artificial intelligence and is now ready to revolutionize our lives. This computational engineer and philosopher has laid the mathematical foundations for robots to think and feel like humans, not just accumulate data. For his discoveries, he has just received the BBVA Frontiers of Knowledge Award.

Author: Ana Tagarro. ABC Madrid.

He has an impressive résumé: the Turing Award – the 'Nobel Prize' of computing – a PhD in engineering, a master's in physics, awards in psychology, statistics, and philosophy, and now the BBVA Foundation Frontiers of Knowledge Award in Communication Technologies. And, as if that weren't enough, he's a gifted pianist. Judea Pearl, however, prefers to define himself as a poet. After all, he makes metaphors with equations. In the 1980s, he developed a mathematical language, Bayesian networks, which today are essential in every computer. But now, at 87 years old, he declares himself an 'apostate' of artificial intelligence. Why? Well, precisely because of that 'why'. It's not wordplay. Pearl argues that unless we teach machines to understand cause-and-effect relationships, in all their complex variations, they will never think like us. And he knows how to achieve it. He explains it to us from his home in Los Angeles, where, at the University of California, he is still a professor – as lucid as the young Israeli, raised in a small biblical town, who arrived in sunny California 60 years ago.

XL: Your goal is to build machines with human-level intelligence, that think like us.

Judea Pearl: Yes, because until now we haven't made machines that 'think'. They only simulate some aspects of human thinking.

"Between humans and machines, only the 'hardware' is different; the 'software' is the same. Perhaps there could be one difference: the fear of death. But I don't know…"

XL: And to make machines think, you claim they must think in terms of causes and effects, asking themselves ‘why’.

J.P.: Yes, but there are levels. It’s what we call ‘the ladder of causality’. Current machines only create associations between what was observed before and what will be observed in the future. This is what allows eagles or snakes to hunt their prey. They know where the mouse will be in five seconds.

XL: But that’s not enough…

J.P.: No. There are two levels above that on the ladder that machines don't reach. One is predicting the effects of actions that have never been carried out before under the same conditions.

XL: But there’s more…

J.P.: The next step is retrospection. For example: I took an aspirin, and my headache went away. Did the aspirin relieve the pain, or was it the good news my wife gave me when I took it? Reasoning along these lines: would an event have taken place if another event in the past had not occurred? For now, this is something only humans do.

The ladder of artificial intelligence: the ultimate leap of machines

XL: Because until now, this way of thinking couldn’t be translated into mathematical formulas, but now it can, thanks to you…

J.P.: Yes, now we have mathematical tools that allow us to reason at all three levels. It's just a matter of applying them to artificial intelligence.
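The three rungs Pearl describes (seeing, doing, and imagining) can be made concrete with a toy simulation. The sketch below is my own illustration, not from the interview; all mechanisms and probabilities are invented. It encodes the aspirin-and-good-news example as a small structural causal model and queries it at each level.

```python
import random

random.seed(0)

# Toy structural causal model (SCM) for the aspirin example.
#   G = my wife gave me good news   (exogenous coin flip)
#   A = I take an aspirin           (depends on G in the observational world)
#   U = the aspirin works this time (exogenous "noise")
#   R = my headache goes away       (a working aspirin OR good news)

def sample(do_aspirin=None):
    g = random.random() < 0.3
    if do_aspirin is None:
        # Someone cheered up by good news reaches for the pill less often.
        a = random.random() < (0.2 if g else 0.7)
    else:
        a = do_aspirin            # intervention do(A): set A, ignore G
    u = random.random() < 0.8
    r = (a and u) or g            # the relief mechanism
    return g, a, u, r

N = 100_000

# Rung 1, association: P(relief | I was *seen* taking an aspirin).
obs = [sample() for _ in range(N)]
took = [(g, u, r) for g, a, u, r in obs if a]
p_seeing = sum(r for g, u, r in took) / len(took)

# Rung 2, intervention: P(relief | do(take an aspirin)).
exp = [sample(do_aspirin=True) for _ in range(N)]
p_doing = sum(r for g, a, u, r in exp) / N

# Rung 3, counterfactual: I took it and the pain went away; would it
# have gone away anyway? Keep the same exogenous (g, u) and rerun the
# mechanism with A forced off: relief becomes (False and u) or g == g.
relieved = [(g, u) for g, u, r in took if r]
p_anyway = sum(g for g, u in relieved) / len(relieved)

print(f"P(relief | aspirin seen):              {p_seeing:.2f}")  # roughly 0.82
print(f"P(relief | do(aspirin)):               {p_doing:.2f}")   # roughly 0.86
print(f"P(relief anyway | took it, it passed): {p_anyway:.2f}")  # roughly 0.13
```

The gap between the first two numbers is the difference between seeing and doing (here, good news confounds who takes the pill); the third is the counterfactual, answered by reusing the same background conditions and rerunning the mechanism with the action removed.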

XL: Let me make sure I understand: you translate imagination, responsibility, and even guilt into equations?

J.P.: Yes, correct.

XL: Correct and mind-blowing, right? Robots will be able to imagine things that don't exist. And you yourself say that this capacity has been key to humans' dominance over other species. Will machines now do the same?

J.P.: Correct, totally. Humans created this ‘market of promises,’ convincing someone to do something in exchange for a promise of the future. And machines will be able to do that.

“We create robots for the same reason we have children. To replicate ourselves. And we raise them in the hope that they will have our values. And most of the time, it works out well.”

XL: You confidently claim, for example, that robots will play soccer and say things like, “You should have passed me the ball earlier.”

J.P.: Yes, of course, and soccer will be much better. Robots will communicate like humans. They will have their own will, desires… I’m surprised that this surprises you [laughs].

XL: What surprises me is how naturally you speak about these ‘human’ machines…

J.P.: Look, I’ve been in artificial intelligence for over 50 years. I grew up with the clear idea that anything we can do, machines will be able to do. I don’t see any obstacle, none.

XL: But then, what sets us apart from machines?

J.P.: That we are made of organic matter and machines are made of silicon. The hardware is different, but the software is the same.

“Artificial intelligence has the potential to be terrifying and the potential to be extremely convenient. For now, it’s just ‘new.’ It’s too early to legislate.”

XL: Not much difference…

J.P.: There might be one difference: the fear of death. But I’m not sure that makes a big difference, maybe.

XL: And falling in love?

J.P.: Machines can fall in love. Marvin Minsky has an entire book on the emotions of machines, The Emotion Machine, it’s from years ago…

XL: That’s a bit scary…

J.P.: It’s not scary; it’s just new. It has the potential to be terrifying and the potential to be extremely convenient. For now, it’s just ‘new.’

XL: Will machines be able to distinguish between right and wrong?

J.P.: Yes, with the same reliability as humans; maybe even more. The analogy I like is that of our children. We believe they will think like us; we raise them in the hope that we will instill our values in them. And still, there is a risk that my child could turn out to be another Putin. But we all go through the process of raising our children hoping they will acquire our values. And it usually works out well…

XL: But is anyone working on the ethical and moral foundations of this artificial intelligence?

J.P.: A lot of people, yes. But I think it’s too early to legislate.

XL: I would say it’s late…

J.P.: We have a new kind of machine. We need to observe it, because we still don't know how it will evolve. And we can't legislate out of fear, out of unfounded fears.

XL: But you yourself mention that the creators of a highly successful artificial intelligence, AlphaGo from DeepMind, don’t know why it’s so effective, that they don’t ‘control’ their creation…

J.P.: Correct. But look, we don’t know how the human mind works either. We don’t know how our children will develop their minds, and still, we trust them. And do you know why? Because they work like us. And we think: they probably think like I do. And that’s what will happen with machines.

XL: But then children turn out however they want… And you maintain that free will is "an illusion." And here we were, thinking we made choices! What a disappointment…

J.P.: For you, it’s a disappointment, for me it’s a great comfort. Since Aristotle and Maimonides, philosophers have been trying to reconcile the idea of God with free will. A God who predicts the future, who knows what’s good and what’s bad, and yet punishes us for doing things He’s programmed us to do. This is a terrible ethical problem that we couldn’t solve.

XL: And you’re going to solve it with artificial intelligence?

J.P.: Of course, because the first premise is that there is no free will. We have the illusion of being in control when we make decisions, but that's not the case. The decision has already been made in the brain. It's our neurons that tell us how to act; it's their firing that makes me move my hand or scratch my nose. It's deterministic, and there's no divine force behind it.

“We’ll have implants, and they will interact with those of other people. It’s scary, huh? [laughs]. But we all already have implants: they’re called ‘language,’ ‘culture’… we are born with them.”

XL: What can we do to teach or learn mathematics better?

J.P.: Bill Gates asked me the same thing. And I looked at my own education. I was lucky to have excellent teachers, German Jews who came to Tel Aviv fleeing the Nazi regime. They taught science and mathematics chronologically, not logically. When they told us about Archimedes, how he jumped out of the bathtub shouting "Eureka, Eureka!", we got involved. The basis of our intelligence is stories, narratives, because they connect people. Stories make history. It's easier to instill abstract ideas like mathematics through stories, through narratives.

XL: And what about philosophy, which is now being sidelined in education?

J.P.: It’s terrible. Philosophy is very important. It connects us with at least 80 generations of thinkers. It creates a common language, builds civilization.

XL: But it’s not useful for finding a job… or so they say. And priority is given to engineering, which makes those robots that are precisely going to take our jobs…

J.P.: Yes, that’s already happening. And it will happen more. There are two aspects to this: one is how we’re going to feel useful when we don’t have a job. The other is how we’ll live, how we’ll earn a salary. The second is an economic and management issue. I don’t have a solution for that. But there is one. It will come.

XL: And for the first one?

J.P.: We can solve it. I’m 87 years old, I’m useless, and I find joy every hour of the day.

XL: [Laughs] You are definitely not useless, and you know it.

J.P.: Look, almost everything is illusory. I live with the illusion of my environment’s response, from my children, my students. If I give a lecture, I feel happy because I have the illusion that it benefits someone. It’s possible to create illusions. One creates them for oneself.

"The basis of our intelligence is stories, narratives, because they connect people. Stories make history. It's easier to instill abstract ideas like mathematics through narratives."

XL: We were talking earlier about good and evil. You have suffered evil in an unimaginable way, when your son was murdered (see box); now there is a war… Can machines change that, make us better?

J.P.: I don’t have the answer. But maybe, when we implement empathy or remorse in machines, we’ll understand how they form in us and be able to become somewhat better.

XL: And what do you think about incorporating technology into our bodies? Becoming transhuman…

J.P.: I see no obstacle to that. We’ll have implants, and they’ll interact with implants from other people or other agents.

XL: Would you like to have an implant in your brain?

J.P.: Scary, huh? [laughs]. I already have an implant. We all do: they’re called ‘language,’ ‘culture’… we are born with them. But since we’re used to them, they don’t surprise us.

XL: But why do you insist on making machines smarter than us?

J.P.: Because we’re trying to replicate and amplify ourselves.

XL: For what?

J.P.: For the same reason we have children.

XL: I get the analogy, but we used to create machines to help us; now they’re replacing us.

J.P.: No, no. We create machines to help us. They will replace us, yes. But we created them to help us [laughs]. Though they’ll surpass us.

XL: Is there a mathematical formula for justice?

J.P.: There has to be. That way, there would be no ambiguity, and no dictator could tell us what is just. To fight a Putin, we’d need more mathematics.

“I don’t make predictions, but the future is going to be totally different, a revolution. I’m optimistic, even though I don’t know where it will take us.”

XL: You have a lot of old books.

J.P.: I collect them. I have a first edition of Galileo [he picks it up].

XL: You travel through time. You go from those books to artificial intelligence. I can’t help but ask, even though you told me not to, how do you see the world in 10 or 20 years…

J.P.: [Laughs]. I don’t make predictions. But it’s going to be totally different, a revolution. I don’t know where it will take us, but I’m optimistic. Though it’s sad that my grandchildren won’t enjoy, for example, reading my old books. The cultural gap between generations will grow. And that worries me. Because they’re going to lose all the wisdom we transmitted from parents to children.

XL: And you say that, while making thinking robots!

J.P.: Yes, but I make thinking machines to understand how we think.

XL: What advice would you give to the ‘still salvable’ young people?

J.P.: Read history.

XL: Read? You’re too optimistic…

J.P.: Alright, let them watch documentaries. About civilizations, evolution, how we became who we are. Be curious! That’s my advice: try to understand things for yourselves.

Artificial Intelligence – Interview with Judea Pearl

Author: Research Team from the Laboratory of the Future
