Predictions of Judea Pearl.

He has revolutionized artificial intelligence and is now ready to revolutionize our lives. This computer scientist and philosopher has laid the mathematical foundations for robots to think and feel like humans, not just accumulate data. For his discoveries, he has just received the BBVA Frontiers of Knowledge Award.

Author: Ana Tagarro. ABC Madrid.

He has an impressive resume. The Turing Award – the ‘Nobel’ of computing – a PhD in engineering, a master’s in physics, awards in psychology, statistics, and philosophy, and now, the BBVA Foundation Frontiers of Knowledge Award in Information and Communication Technologies. And, as if that weren’t enough, he’s a gifted pianist. Judea Pearl, however, prefers to define himself as a poet. After all, he makes metaphors with equations. In the 1980s, he developed a mathematical language, Bayesian networks, which are essential in every computer today. But now, at 87 years old, he declares himself an ‘apostate’ of artificial intelligence. Why? Well, precisely because of that why. It’s not wordplay. Pearl states that unless we teach machines to understand cause-and-effect relationships, in all their complex variations, they won’t think like us. And he knows how to achieve it. He explains it to us from his home in Los Angeles. There, at the University of California, he is still a professor, as lucid as the young Israeli, raised in a small biblical town, who arrived in sunny California 60 years ago.

XL: Your goal is to build machines with human-level intelligence, that think like us.

Judea Pearl: Yes, because until now we haven’t made machines that ‘think’. They only simulate some aspects of human thinking.

“Between humans and machines, only the ‘hardware’ is different; the ‘software’ is the same. Perhaps there could be one difference: the fear of death. But I don’t know…”

XL: And to make machines think, you claim they must think in terms of causes and effects, asking themselves ‘why’.

J.P.: Yes, but there are levels. It’s what we call ‘the ladder of causality’. Current machines only create associations between what was observed before and what will be observed in the future. This is what allows eagles or snakes to hunt their prey. They know where the mouse will be in five seconds.

XL: But that’s not enough…

J.P.: No. There are two rungs above that on the ladder that machines can’t reach. One is predicting the effects of actions that have never been carried out before under the same conditions.

XL: But there’s more…

J.P.: The next step is retrospection. For example: I took an aspirin, and my headache went away. Did the aspirin relieve the pain, or was it the good news my wife gave me when I took it? Thinking along these lines: would an event have taken place if another event in the past had not occurred? For now, this is something only humans do.

The ladder of artificial intelligence. The ultimate leap of machines.

XL: Because until now, this way of thinking couldn’t be translated into mathematical formulas, but now it can, thanks to you…

J.P.: Yes, now we have mathematical tools that allow us to reason in all three levels. It’s just a matter of applying them to artificial intelligence.
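
A minimal sketch of the three rungs Pearl describes, built around a toy structural causal model of the aspirin anecdote. The variables, probabilities, and mechanisms below are illustrative assumptions for the example, not Pearl’s own formalization; the point is only to show association (rung 1), intervention (rung 2), and counterfactual reasoning (rung 3) as three different computations on the same model.

```python
import random

# Toy structural causal model for the aspirin example in the interview.
# All variable names and mechanisms are illustrative assumptions.

def sample_world(rng):
    """Draw the exogenous background factors of one 'world'."""
    return {
        "u_news": rng.random(),     # did the wife bring good news?
        "u_aspirin": rng.random(),  # was an aspirin taken?
        "u_relief": rng.random(),   # idiosyncratic response
    }

def mechanism(world, do_aspirin=None):
    """Structural equations; do_aspirin overrides the aspirin mechanism (rung 2)."""
    news = world["u_news"] < 0.3
    aspirin = world["u_aspirin"] < 0.5 if do_aspirin is None else do_aspirin
    # Relief is likelier with aspirin, with good news, or both.
    p_relief = 0.1 + 0.5 * aspirin + 0.3 * news
    relief = world["u_relief"] < p_relief
    return {"news": news, "aspirin": aspirin, "relief": relief}

rng = random.Random(0)
worlds = [sample_world(rng) for _ in range(100_000)]
obs = [mechanism(w) for w in worlds]

# Rung 1 - association: P(relief | aspirin was observed to be taken).
taken = [o for o in obs if o["aspirin"]]
print("P(relief | saw aspirin)  =", sum(o["relief"] for o in taken) / len(taken))

# Rung 2 - intervention: P(relief | do(aspirin)) vs P(relief | do(no aspirin)).
do_yes = [mechanism(w, do_aspirin=True) for w in worlds]
do_no = [mechanism(w, do_aspirin=False) for w in worlds]
print("P(relief | do(aspirin))  =", sum(o["relief"] for o in do_yes) / len(worlds))
print("P(relief | do(nothing))  =", sum(o["relief"] for o in do_no) / len(worlds))

# Rung 3 - counterfactual: among worlds where aspirin was taken and relief
# occurred, would relief still have occurred had the aspirin NOT been taken?
factual = [(w, o) for w, o in zip(worlds, obs) if o["aspirin"] and o["relief"]]
still_relieved = sum(mechanism(w, do_aspirin=False)["relief"] for w, _ in factual)
print("P(relief without aspirin | took it and felt relief) =",
      still_relieved / len(factual))
```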

XL: Let me clarify what you said; does that mean you translate imagination, responsibility, and even guilt into equations?

J.P.: Yes, correct.

XL: Correct and mind-blowing, right? Robots will be able to imagine things that don’t exist. And you yourself say that this capacity has been key to the human domination over other species. Will machines now do it?

J.P.: Correct, totally. Humans created this ‘market of promises,’ convincing someone to do something in exchange for a promise of the future. And machines will be able to do that.

“We create robots for the same reason we have children. To replicate ourselves. And we raise them in the hope that they will have our values. And most of the time, it works out well.”

XL: You confidently claim, for example, that robots will play soccer and say things like, “You should have passed me the ball earlier.”

J.P.: Yes, of course, and soccer will be much better. Robots will communicate like humans. They will have their own will, desires… I’m surprised that this surprises you [laughs].

XL: What surprises me is how naturally you speak about these ‘human’ machines…

J.P.: Look, I’ve been in artificial intelligence for over 50 years. I grew up with the clear idea that anything we can do, machines will be able to do. I don’t see any obstacle, none.

XL: But then, what sets us apart from machines?

J.P.: That we are made of organic matter and machines are made of silicon. The hardware is different, but the software is the same.

“Artificial intelligence has the potential to be terrifying and the potential to be extremely convenient. For now, it’s just ‘new.’ It’s too early to legislate.”

XL: Not much difference…

J.P.: There might be one difference: the fear of death. But I’m not sure that makes a big difference, maybe.

XL: And falling in love?

J.P.: Machines can fall in love. Marvin Minsky wrote an entire book on the emotions of machines, The Emotion Machine; it’s from years ago…

XL: That’s a bit scary…

J.P.: It’s not scary; it’s just new. It has the potential to be terrifying and the potential to be extremely convenient. For now, it’s just ‘new.’

XL: Will machines be able to distinguish between right and wrong?

J.P.: Yes, with the same reliability as humans; maybe even more. The analogy I like is that of our children. We believe they will think like us; we raise them in the hope that we will instill our values in them. And still, there is a risk that my child could turn out to be another Putin. But we all go through the process of raising our children hoping they will acquire our values. And it usually works out well…

XL: But is anyone working on the ethical and moral foundations of this artificial intelligence?

J.P.: A lot of people, yes. But I think it’s too early to legislate.

XL: I would say it’s late…

J.P.: We have a new kind of machine. We need to observe it because we still don’t know how it will evolve. And we can’t legislate out of fear, from unfounded fears.

XL: But you yourself mention that the creators of a highly successful artificial intelligence, AlphaGo from DeepMind, don’t know why it’s so effective, that they don’t ‘control’ their creation…

J.P.: Correct. But look, we don’t know how the human mind works either. We don’t know how our children will develop their minds, and still, we trust them. And do you know why? Because they work like us. And we think: they probably think like I do. And that’s what will happen with machines.

XL: But then children turn out how they want… Although you maintain that free will is “an illusion.” And here we were thinking we made choices! What a disappointment…

J.P.: For you, it’s a disappointment, for me it’s a great comfort. Since Aristotle and Maimonides, philosophers have been trying to reconcile the idea of God with free will. A God who predicts the future, who knows what’s good and what’s bad, and yet punishes us for doing things He’s programmed us to do. This is a terrible ethical problem that we couldn’t solve.

XL: And you’re going to solve it with artificial intelligence?

J.P.: Of course, because the first premise is that there is no free will. We have the illusion that we are in control when we make decisions, but that’s not the case. The decision has already been made in the brain. It’s our neurons that tell us how to act, the ones that, through excitement or nervousness, make me move my hand or scratch my nose. It’s deterministic, and there’s no divine force behind it.

“We’ll have implants, and they will interact with those of other people. It’s scary, huh? [laughs]. But we all already have implants: they’re called ‘language,’ ‘culture’… we are born with them.”

XL: What can we do to teach or learn mathematics better?

J.P.: Bill Gates asked me the same thing. And I looked at my own education. I was lucky to have excellent teachers, German Jews who came to Tel Aviv fleeing the Nazi regime. They taught science and mathematics chronologically, not logically. When they told us about Archimedes, how he jumped out of the bathtub shouting “Eureka, Eureka!”, we got involved. The basis of our intelligence is stories, narratives, because they connect people. Stories make history. It’s easier to implant abstract ideas like mathematics through stories, through narratives.

XL: And what about philosophy, which is now being sidelined in education?

J.P.: It’s terrible. Philosophy is very important. It connects us with at least 80 generations of thinkers. It creates a common language, builds civilization.

XL: But it’s not useful for finding a job… or so they say. And priority is given to engineering, which builds precisely the robots that are going to take our jobs…

J.P.: Yes, that’s already happening. And it will happen more. There are two aspects to this: one is how we’re going to feel useful when we don’t have a job. The other is how we’ll live, how we’ll earn a salary. The second is an economic and management issue. I don’t have a solution for that. But there is one. It will come.

XL: And for the first one?

J.P.: We can solve it. I’m 87 years old, I’m useless, and I find joy every hour of the day.

XL: [Laughs] You are definitely not useless, and you know it.

J.P.: Look, almost everything is illusory. I live with the illusion of my environment’s response, from my children, my students. If I give a lecture, I feel happy because I have the illusion that it benefits someone. It’s possible to create illusions. One creates them for oneself.

“The basis of our intelligence is stories, narratives, because they connect people. Stories make history. It’s easier to implant abstract ideas like mathematics through narratives.”

XL: We were talking earlier about good and evil. You have suffered evil in an unimaginable way, when your son was murdered (see box); now there is a war… Can machines change that, make us better?

J.P.: I don’t have the answer. But maybe, when we implement empathy or remorse in machines, we’ll understand how they form in us and be able to become somewhat better.

XL: And what do you think about incorporating technology into our bodies? Becoming transhuman…

J.P.: I see no obstacle to that. We’ll have implants, and they’ll interact with implants from other people or other agents.

XL: Would you like to have an implant in your brain?

J.P.: Scary, huh? [laughs]. I already have an implant. We all do: they’re called ‘language,’ ‘culture’… we are born with them. But since we’re used to them, they don’t surprise us.

XL: But why do you insist on making machines smarter than us?

J.P.: Because we’re trying to replicate and amplify ourselves.

XL: For what?

J.P.: For the same reason we have children.

XL: I get the analogy, but we used to create machines to help us; now they’re replacing us.

J.P.: No, no. We create machines to help us. They will replace us, yes. But we created them to help us [laughs]. Though they’ll surpass us.

XL: Is there a mathematical formula for justice?

J.P.: There has to be. That way, there would be no ambiguity, and no dictator could tell us what is just. To fight a Putin, we’d need more mathematics.

“I don’t make predictions, but the future is going to be totally different, a revolution. I’m optimistic, even though I don’t know where it will take us.”

XL: You have a lot of old books.

J.P.: I collect them. I have a first edition of Galileo [he picks it up].

XL: You travel through time. You go from those books to artificial intelligence. I can’t help but ask, even though you told me not to, how do you see the world in 10 or 20 years…

J.P.: [Laughs]. I don’t make predictions. But it’s going to be totally different, a revolution. I don’t know where it will take us, but I’m optimistic. Though it’s sad that my grandchildren won’t enjoy, for example, reading my old books. The cultural gap between generations will grow. And that worries me. Because they’re going to lose all the wisdom we transmitted from parents to children.

XL: And you say that, while making thinking robots!

J.P.: Yes, but I make thinking machines to understand how we think.

XL: What advice would you give to the ‘still salvable’ young people?

J.P.: Read history.

XL: Read? You’re too optimistic…

J.P.: Alright, let them watch documentaries. About civilizations, evolution, how we became who we are. Be curious! That’s my advice: try to understand things for yourselves.

Artificial Intelligence – Interview with Judea Pearl

Series The Pioneers of Artificial Intelligence 11

The American John Hopfield and the British Geoffrey Hinton were recognized for their advances in artificial neural networks, a computational structure inspired by the functioning of the brain.

The Royal Swedish Academy of Sciences awarded the 2024 Nobel Prize in Physics to American John Hopfield and British Geoffrey Hinton for their fundamental contributions to the development of machine learning, considered a key tool for Artificial Intelligence (AI) as we know it today.

Hopfield was born in 1933 in Chicago and conducts his research at Princeton University, USA. Hinton was born in 1947 in London and is a researcher at the University of Toronto, Canada.

In presenting the laureates, the Nobel committee highlighted that “although computers cannot think, machines can now imitate functions such as memory and learning. This year’s Nobel laureates in Physics have contributed to making this possible.”

Using principles of physics, both scientists achieved key breakthroughs that laid the foundations for artificial neural networks, a computational structure inspired by the functioning of the brain. This discovery not only changed the way machines process and store information, but it was also crucial for the development of modern Artificial Intelligence (AI), particularly in deep learning.

The work of Hopfield, from Princeton University, and Hinton, from the University of Toronto, is deeply related to concepts from physics and biology. Although today we associate machine learning with computers and algorithms, the first steps toward the creation of artificial neural networks stemmed from the desire to understand how the human brain works and processes information. Hopfield, a theoretical physicist, played a decisive role in applying physical concepts to neuroscience to explain how the brain can store and retrieve information.

In 1982, he developed the Hopfield network, an artificial neural network model that can store patterns of information and later retrieve them even when they are incomplete or altered. This concept, known as associative memory, mimics the human ability to recall, for example, a word that is on the tip of the tongue, processing other nearby meanings until the correct one is found.

Hopfield applied physical knowledge, such as the principles governing atomic spin systems, to create his network. In physics, spin is a property of subatomic particles that generates a magnetic field. Inspired by this behavior, Hopfield designed a system in which neurons, or nodes, were interconnected with varying intensities, similar to how the atoms in a magnetic material influence the directions of their neighboring spins.

This approach allowed the network to efficiently associate and reconstruct patterns, a revolutionary idea that marked the beginning of a new era in neural computation.

Inspired by neuroscience, Hopfield designed a model that can reproduce even incomplete patterns, applying physical principles similar to the behavior of magnetic materials.

The Hopfield network represents a significant advance because it is based on a system capable of storing multiple patterns simultaneously. When an incomplete pattern is presented, the network finds the closest of the patterns it has memorized and reconstructs it. This process resembles rolling a ball across a landscape of peaks and valleys: if the ball is dropped near a valley (a stored pattern), it rolls to the bottom, where it finds the closest pattern.

In technical terms, the network is programmed with a black-and-white image by assigning binary values to each node (0 for black, 1 for white). Then, an energy formula is used to adjust the connections between the nodes, allowing the network to reduce the system’s total energy and eventually reach a stable state where the original pattern has been recreated. This approach was not only novel but also proved to be scalable: the network could store and differentiate multiple images, opening the door to a form of distributed information storage that would later inspire advancements in artificial intelligence.
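
To make the description above concrete, here is a minimal sketch of a Hopfield-style associative memory, using the common ±1 coding rather than the 0/1 image values mentioned in the text. The network size, number of patterns, and amount of noise are arbitrary choices for the demo, not anything from Hopfield’s 1982 paper.

```python
import numpy as np

# Minimal Hopfield-style associative memory: store a few +/-1 patterns with a
# Hebbian rule, then recover the nearest stored pattern from a corrupted cue
# by repeatedly lowering the network energy E = -1/2 * s^T W s.

rng = np.random.default_rng(0)
n = 64                                        # number of binary neurons
patterns = rng.choice([-1, 1], size=(3, n))   # three random stored "images"

# Hebbian weights: neurons that fire together are wired together.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)                        # no self-connections

def energy(state):
    return -0.5 * state @ W @ state

def recall(cue, sweeps=5):
    """Asynchronously update neurons until the state settles in a valley."""
    state = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt a stored pattern by flipping 15 of its 64 bits, then recall it.
original = patterns[0]
noisy = original.copy()
flip = rng.choice(n, size=15, replace=False)
noisy[flip] *= -1

restored = recall(noisy)
print("bits wrong before:", int(np.sum(noisy != original)))
print("bits wrong after :", int(np.sum(restored != original)))
print("energy decreased :", energy(restored) < energy(noisy))
```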

While Hopfield developed his network, Geoffrey Hinton explored how machines could learn to process patterns similarly to humans, finding their own categories without the need for explicit instructions.

Hinton pioneered the Boltzmann machine, a type of neural network that uses principles of statistical physics to discover structures in large amounts of data.

Statistical physics deals with systems made up of many similar elements, like the molecules of a gas, whose individual states are unpredictable but can be collectively analyzed to determine properties like pressure and temperature. Hinton leveraged these concepts to design a machine that can analyze the probability of a specific configuration of the network’s nodes occurring, based on the network’s overall energy. Inspired by Ludwig Boltzmann’s equation, Hinton used this formula to calculate the probability of different configurations within the network.

The Boltzmann machine has two types of nodes: visible and hidden. The former receive the initial information, while the hidden nodes generate patterns from that information, adjusting the network’s connections so that the trained examples are most likely to occur. In this way, the machine learns from examples, not instructions, and can recognize patterns even when the information is new but resembles previously seen examples.
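
The following sketch illustrates, on a deliberately tiny scale, the Boltzmann-distribution idea described above: every joint configuration of visible and hidden units is assigned a probability proportional to exp(-E), where E is the network’s energy. The weights are random illustrative numbers and the network is small enough to enumerate exactly; a real Boltzmann machine would be trained on data and sampled rather than enumerated.

```python
import itertools
import numpy as np

# Tiny Boltzmann-machine-style sketch: 2 visible + 2 hidden binary units.
# Each joint configuration gets probability P(v, h) = exp(-E(v, h)) / Z.

rng = np.random.default_rng(1)
n_visible, n_hidden = 2, 2
n = n_visible + n_hidden
W = rng.normal(scale=0.8, size=(n, n))
W = (W + W.T) / 2                  # symmetric couplings
np.fill_diagonal(W, 0)             # no self-connections
b = rng.normal(scale=0.3, size=n)  # biases

def energy(state):
    s = np.asarray(state, dtype=float)
    return -0.5 * s @ W @ s - b @ s

# Exact Boltzmann distribution over all 2^4 configurations.
configs = list(itertools.product([0, 1], repeat=n))
weights = np.array([np.exp(-energy(c)) for c in configs])
Z = weights.sum()                  # partition function
probs = weights / Z

# Marginal over the visible units: sum out the hidden units.
marginal = {}
for c, p in zip(configs, probs):
    v = c[:n_visible]
    marginal[v] = marginal.get(v, 0.0) + p

for v, p in sorted(marginal.items()):
    print(f"P(visible={v}) = {p:.3f}")
print("sums to", round(sum(marginal.values()), 6))
```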

The work of Hopfield and Hinton not only revitalized interest in neural networks, but also paved the way for the development of deep learning, a branch of AI that today drives much of the technological innovations, from virtual assistants to autonomous vehicles.

Deep neural networks, which are models with many layers of neurons, owe their existence to these early breakthroughs in artificial neural networks.

Today, neural networks are essential tools for analyzing vast amounts of data, identifying complex patterns in images and sounds, and improving decision-making in fields ranging from medicine to astrophysics.

For example, in particle physics, artificial neural networks were key in discovering the Higgs boson, an achievement awarded the Nobel Prize in Physics in 2013. Similarly, machine learning has helped improve the detection of gravitational waves, another recent scientific milestone.

Thanks to the discoveries of Hopfield and Hinton, AI continues to evolve at a rapid pace. In the field of molecular biology, for instance, neural networks are used to predict protein structures, which has direct implications in drug development. Additionally, in renewable energy, networks are being used to design materials with better properties for more efficient solar cells.

Pioneers of Artificial Intelligence McCarthy 10

John McCarthy (1927–2011) was an American mathematician and computer scientist renowned for coining the term “artificial intelligence” and for his pioneering contributions to the development of the field. His vision, technical contributions, and role in founding the discipline make him one of the fathers of artificial intelligence, and his legacy continues to inspire researchers around the world in their pursuit of intelligent machines.

The Dartmouth Conference: In 1956, McCarthy organized the Dartmouth Conference, a historic event where the leading researchers of the time gathered to discuss the possibility of creating intelligent machines. This conference marked the formal birth of artificial intelligence as a field of study.

The Dartmouth Conference itself was an academic meeting held in the summer of 1956 at Dartmouth College in Hanover, New Hampshire, proposed and organized by John McCarthy together with Marvin Minsky, Nathaniel Rochester, and Claude Shannon.

The main goal of the conference was to explore the possibility of creating machines capable of performing tasks that had previously been considered exclusive to humans, such as reasoning, learning, and problem-solving. The organizers hypothesized that it was possible to simulate any aspect of learning or any other characteristic of intelligence in a machine.

Although the group of participants was relatively small, it was composed of some of the brightest computer scientists of the time. Among them were:

  • John McCarthy: the main organizer and the person who coined the term “artificial intelligence.”
  • Marvin Minsky: co-founder, with McCarthy, of the MIT Artificial Intelligence Laboratory.
  • Claude Shannon: considered the father of information theory.
  • Allen Newell and Herbert Simon: pioneers in the field of symbolic artificial intelligence and creators of the Logic Theorist program.

During the six weeks of the conference, the participants discussed a wide range of topics related to artificial intelligence, including:

  • Automation of creative processes: how to make machines capable of writing music, composing poetry, or creating works of art.
  • Simulation of mental processes: how to model human thinking in a machine.
  • Development of programming languages: the need to create programming languages suitable for research in artificial intelligence.
  • Machine learning: how to make machines learn from experience.
  • Computational neuroscience: the relationship between artificial intelligence and the functioning of the human brain.

This conference had a lasting impact on the development of artificial intelligence. Some of the most important outcomes were:

  • The birth of a field: the conference solidified artificial intelligence as an academic and scientific discipline.
  • The creation of research laboratories: following the conference, numerous research laboratories in artificial intelligence were founded around the world.
  • The development of new programming languages: Lisp, one of the most important programming languages for AI, was developed in the years following the conference.
  • Funding for research: the conference generated significant interest in artificial intelligence and attracted substantial investment for research in the field.

In summary, the Dartmouth Conference was a seminal event that laid the foundation for the development of artificial intelligence as we know it today. Thanks to this conference, a group of visionary scientists took the first steps toward creating intelligent machines.

The Term “Artificial Intelligence”: It was at this conference that McCarthy proposed the term “artificial intelligence” to describe the science and engineering of creating intelligent machines.

McCarthy developed the Lisp programming language, one of the first languages specifically designed for artificial intelligence research. Lisp, whose name stands for LISt Processor, is a high-level language created by McCarthy in the late 1950s and notable for its conceptual simplicity, its flexibility, and its ability to manipulate symbols, which made it a fundamental tool for building expert systems and for research in machine learning.

The key characteristics of Lisp can be summarized as follows:

  • Homoiconic syntax: One of Lisp’s most distinctive features is its homoiconic syntax. This means that Lisp code is itself a data structure, allowing great flexibility in manipulating the code.
  • List processing: As its name suggests, Lisp is designed to work with lists. Lists are the fundamental data structure in Lisp and are used to represent both data and code.
  • First-class functions: Functions in Lisp are treated as any other data. They can be assigned to variables, passed as arguments to other functions, and returned as values.
  • Macros: Lisp offers a powerful macro system that allows programmers to extend the language and create new syntactic constructs.
  • Multiparadigm: Lisp is a multiparadigm language, meaning it supports different programming styles, such as functional programming, imperative programming, and object-oriented programming.
  • Influence on other languages: Lisp gave rise to dialects such as Scheme and Clojure and has influenced the design of many other programming languages, including Python and JavaScript.
  • Used in artificial intelligence: Lisp was one of the first languages used for research in artificial intelligence and remains popular in this field.
  • Metaprogramming: The homoiconic syntax of Lisp facilitates metaprogramming, i.e., the ability to write programs that manipulate other programs.
  • Flexibility: Lisp is a very flexible language that allows programmers to express ideas concisely and elegantly.

Today, Lisp is used in artificial intelligence research, machine learning, and natural language processing. It is also used to develop general-purpose software, some web applications, and embedded systems. Additionally, it is used as a teaching language in universities and programming schools due to its simplicity and expressive power.

In summary, Lisp is a programming language with a long history and significant influence on the development of computing. Its homoiconic syntax, focus on list processing, and flexibility make it a powerful tool for programming and research. Although it may seem like an old language, Lisp remains relevant today and continues to inspire new programmers.

McCarthy introduced fundamental concepts for the development of artificial intelligence, such as heuristics, search, and expert systems. These ideas laid the foundation for much of the subsequent research in the field.

Another of his contributions was institutional: along with Marvin Minsky, McCarthy founded the MIT Artificial Intelligence Laboratory, one of the most important research centers in the field.

The MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) is one of the most prestigious and important research labs in the world in the fields of computer science and artificial intelligence. CSAIL is the result of the merger of two pre-existing labs at MIT:

  • Laboratory for Computer Science: Founded in 1963, it focused on fundamental research in computer science.
  • Artificial Intelligence Laboratory: Founded in 1959, it was dedicated to pioneering research in artificial intelligence.

In 2003, both labs merged to form CSAIL, creating an even larger and more powerful research center. Thus, CSAIL is the product of the work of numerous researchers, scientists, and visionaries over several decades.

Its current major research areas include:

  • Artificial Intelligence: development of machine learning algorithms, computer vision, natural language processing, and expert systems.
  • Robotics: design and construction of autonomous robots and intelligent control systems.
  • Computational Biology: application of computational techniques to analyze biological data and develop new therapies.
  • Cybersecurity: development of secure systems and protocols to protect information and critical infrastructures.
  • Human-Computer Interaction: design of intuitive and natural interfaces for interacting with computers.

The work done at CSAIL has had a significant impact on society and industry. Some of the most notable achievements include the development of key technologies such as the Internet, the World Wide Web, and natural language processing; innovation in robotics, and advances in artificial intelligence, including pioneering deep learning algorithms and contributions to the creation of virtual assistants and autonomous vehicles.

McCarthy not only focused on the technical aspects of artificial intelligence but also reflected on the philosophical and social implications of this technology. He was an advocate for artificial intelligence as a tool to solve real-world problems and improve people’s quality of life.

John McCarthy played a key role in the development of expert systems, one of the first practical applications of artificial intelligence.

Expert systems are computer programs designed to emulate the reasoning of a human expert in a specific domain. These systems use a knowledge base and inference rules to solve problems and make decisions. For example, a medical expert system might diagnose diseases based on a patient’s symptoms and medical history.
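
As a concrete illustration of the “knowledge base plus inference rules” idea, here is a minimal forward-chaining sketch. The rules and conclusions are invented placeholders, not real medical knowledge or the rule base of any historical expert system.

```python
# Minimal forward-chaining inference in the spirit of the expert systems
# described above. All rules and "diagnoses" are illustrative placeholders.

RULES = [
    # (conditions that must all be known facts, conclusion to add)
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"possible respiratory infection", "shortness of breath"}, "refer to clinician"),
    ({"rash", "itching"}, "possible allergic reaction"),
]

def infer(facts):
    """Repeatedly fire rules whose conditions hold until nothing new is added."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

patient = {"fever", "cough", "shortness of breath"}
print(infer(patient) - patient)
# -> {'possible respiratory infection', 'refer to clinician'}
```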

Although McCarthy did not develop the first expert system, his ideas and contributions were fundamental to the development of this technology. His focus on knowledge representation and logical reasoning provided a solid foundation for the creation of these systems.

McCarthy emphasized the importance of representing knowledge in a formal and structured way. This idea was crucial for the creation of knowledge bases used in expert systems. McCarthy and his colleagues developed rule-based reasoning techniques, which allow expert systems to draw conclusions from a set of facts and rules. The Lisp language, which was widely used to develop expert systems due to its ability to manipulate symbols, played a key role.

McCarthy’s ideas on knowledge representation and logical reasoning remain relevant in the development of intelligent systems. Although expert systems have evolved significantly since the early days, the fundamental principles established by McCarthy are still valid.

John McCarthy passed away in 2011, leaving an indelible legacy in the field of artificial intelligence. His ideas and contributions continue to inspire researchers around the world. Throughout his career, he received numerous recognitions, including the Turing Award, considered the Nobel Prize of computing.

In summary, John McCarthy was a visionary who transformed the way we think about intelligence and machines. His passion for logic, his ability to create powerful tools, and his forward-looking vision laid the groundwork for the development of modern artificial intelligence.

Although McCarthy did not exclusively focus on robotics, his ideas and contributions were foundational for the development of this discipline. His approach to knowledge representation, planning, and reasoning provided a solid foundation for the creation of intelligent robots.

Task planning: Planning techniques developed in the context of artificial intelligence, influenced by McCarthy’s work, were applied to robotics to allow robots to plan and execute complex action sequences. For example, an industrial robot can plan the best path to move a part from one point to another while avoiding obstacles.
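
A toy version of that path-planning idea, assuming nothing beyond a hand-drawn grid: breadth-first search finds a shortest obstacle-avoiding route from a start cell to a goal cell. Industrial planners are far more sophisticated, but the underlying search-for-a-path formulation is the same.

```python
from collections import deque

# Shortest obstacle-avoiding route on a small grid via breadth-first search.
# The grid, start (S), and goal (G) are made up for the example.

GRID = [
    "S..#....",
    ".#.#.##.",
    ".#...#..",
    ".#####.#",
    "......G.",
]

def find(ch):
    for r, row in enumerate(GRID):
        if ch in row:
            return (r, row.index(ch))

def shortest_path(start, goal):
    rows, cols = len(GRID), len(GRID[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and GRID[nr][nc] != "#" and (nr, nc) not in parents:
                parents[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    # Walk back from the goal to reconstruct the route.
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = parents[node]
    return list(reversed(path))

route = shortest_path(find("S"), find("G"))
print(len(route) - 1, "moves:", route)
```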

Computer vision: The development of computer vision systems, necessary for robots to perceive their environment, benefited from research on knowledge representation and image processing. McCarthy and his colleagues contributed to laying the groundwork for robots to “see” and understand the world around them.

Robot learning: McCarthy’s ideas about machine learning inspired the development of algorithms that allow robots to learn from experience and improve their performance. For example, a robot can learn to walk more efficiently through trial and error.

McCarthy provided the conceptual and technical tools necessary for robots to perform increasingly complex tasks and adapt to changing environments.

McCarthy was a visionary who believed in the potential of artificial intelligence to transform the world. His vision was ambitious and spanned from practical applications to philosophical issues.

  • Artificial General Intelligence (AGI): McCarthy was convinced that it was possible to create machines with intelligence comparable to human intelligence, what we now know as AGI. He believed that AGI could solve some of humanity’s most important problems, such as poverty, disease, and climate change.
  • Superintelligence: Although he did not use the term “superintelligence,” McCarthy foresaw the possibility that machines could surpass human intelligence in many areas. He expressed both enthusiasm and concern about this possibility, emphasizing the importance of developing AI systems that are safe and beneficial for humanity.
  • Practical applications: He was also interested in the practical applications of artificial intelligence. He envisioned a future in which intelligent systems would assist people in a wide range of tasks, from healthcare to education.

In summary, McCarthy’s vision for the future of artificial intelligence was optimistic and ambitious. He believed that AI would have a profound impact on society and that it was crucial to develop this technology responsibly and ethically.

Pioneers of Artificial Intelligence Yoshua Bengio 9

Yoshua Bengio is a fundamental figure in the field of deep learning, and his contributions have been crucial to the development of this technology.

Bengio showed a great passion for computer science and mathematics from a young age, driven by an interest in understanding how the human mind works and whether some of its capabilities could be replicated in machines. He studied at McGill University, where he obtained his Ph.D. in computer science. During his studies he became deeply interested in artificial neural networks, a technology that at the time was considered to hold little promise, and he drew inspiration from pioneering researchers such as Geoffrey Hinton and David Rumelhart, who laid the foundations for deep learning and gave him a clear vision of its potential.

Today Bengio is one of the main drivers of the field of deep learning. His research on recurrent neural networks (RNNs) and representation learning has had a profound impact on the development of AI. He is a visionary who believes AI has the potential to transform the world, but he is also aware of the challenges and risks the technology poses.

Bengio made significant contributions, particularly in the field of deep learning:

  • Recurrent Neural Networks (RNNs): Bengio is globally recognized for his contributions to the development of RNNs. These networks are ideal for processing sequences of data, such as text or time series, and have revolutionized the field of natural language processing.
  • Representation learning: He has made important advances in representation learning, which seeks to find internal representations of data that allow machines to learn more complex tasks.
  • Founding of MILA: Bengio founded the Montreal Institute for Learning Algorithms (MILA), which has become one of the world’s most important AI research centers.

MILA (Institut Québécois d’Intelligence Artificielle) is a research institute led by Yoshua Bengio and highly influential in the world of deep learning. It is dedicated to basic research in artificial intelligence, aiming to understand the fundamental principles behind learning and intelligence, and its research has led to numerous practical applications in fields such as computer vision, natural language processing, and medicine. The main features of the institute include:

  • Emphasis on local talent: MILA has been key in developing an AI ecosystem in Montreal, attracting talent from around the world and training a new generation of researchers.
  • Close collaboration with industry: MILA works closely with companies like Google DeepMind and Element AI, enabling the translation of research advances into commercial products and services.
  • Commitment to society: MILA is concerned with the social implications of AI and works to ensure that this technology is developed ethically and responsibly.
  • Development of deep learning algorithms: MILA has developed innovative algorithms to train larger and deeper neural networks, significantly improving performance in tasks such as image recognition and natural language processing.
  • Applications in healthcare: MILA researchers are working on AI tools to diagnose diseases, analyze medical images, and personalize treatments.
  • AI for social good: MILA also investigates how AI can be used to address major social challenges, such as climate change and inequality.

RNNs, thanks to their ability to process sequences, have been essential in the development of advanced language models. These networks have enabled:

  • Machine translation: RNN-based models have significantly improved the quality of machine translation.
  • Text generation: RNNs can generate coherent and creative text, such as poems or programming code.
  • Sentiment analysis: They can analyze the sentiment of a text, identifying whether it is positive, negative, or neutral.
  • Chatbots and virtual assistants: RNNs form the foundation of many chatbots and virtual assistants, enabling them to maintain coherent and meaningful conversations.
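
To make “processing a sequence” concrete, here is a bare-bones recurrent cell: the same weights are applied at every step, and a hidden state carries information forward. The sizes and random weights are illustrative assumptions; real language models are far larger, trained on data, and today usually built on more elaborate architectures.

```python
import numpy as np

# A minimal Elman-style recurrent cell applied to a toy sequence.

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8
W_xh = rng.normal(scale=0.4, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.4, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_forward(sequence):
    """Run the cell over a sequence of input vectors; return all hidden states."""
    h = np.zeros(hidden_size)
    states = []
    for x in sequence:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)  # new state mixes input and memory
        states.append(h)
    return states

# A toy "sentence" of five one-hot word vectors from a 4-word vocabulary.
sentence = [np.eye(input_size)[i] for i in (0, 2, 1, 3, 2)]
states = rnn_forward(sentence)
print("steps:", len(states), "| final hidden state:", np.round(states[-1], 2))
```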

Bengio is optimistic about the future of AI but is also aware of the challenges and risks it poses. His main concerns include:

  • Algorithmic biases: AI can perpetuate and amplify biases present in training data.
  • Privacy: The collection and use of large amounts of personal data raises significant privacy concerns.
  • Unemployment: The automation of tasks could lead to job loss and increased inequality.

Despite these challenges, Bengio believes AI can be a force for good, helping us solve some of the world’s biggest problems, such as diseases and climate change.

Bengio continues to work on developing new deep learning techniques and applying these techniques to real-world problems. He is currently focused on:

Ethics in AI: Bengio is actively engaged in discussions about the ethical implications of AI and its societal impact, and he works to ensure that the technology is developed responsibly and for the benefit of humanity.

Self-supervised learning: Bengio believes that self-supervised learning is key to developing more general and capable AI systems.

Artificial General Intelligence (AGI): He is interested in the development of AGI, which refers to AI with cognitive abilities similar to those of humans.

Pioneers of Artificial Intelligence Yann LeCun 8

Yann LeCun, a name synonymous with the deep learning revolution, has had an academic and professional career marked by innate curiosity and a clear vision of the potential of artificial intelligence. LeCun demonstrated from an early age a great interest in technology, building his own circuits and exploring the world of programming. He studied at Sorbonne University and ESIEE Paris, where he acquired a solid foundation in mathematics, computer science, and electronics.

He obtained his Ph.D. from Pierre and Marie Curie University, where he began developing his first research in neural networks and pattern recognition. His early work focused on developing algorithms for optical character recognition (OCR), a technology that has found numerous applications in daily life.

Academic Influences:

LeCun has always acknowledged the significant academic influences that inspired his research while also guiding its specific goals. He often cites Kunihiko Fukushima as a major influence. Fukushima’s work on neocognitron neural networks, designed to recognize visual patterns, was fundamental to the development of CNNs (which we will analyze later). LeCun took many of Fukushima’s ideas and adapted them to create modern CNNs.

A second major influence was David Marr. Marr’s approach to computer vision, which sought to understand how the brain processes visual information, was also an important influence on LeCun. Marr proposed a hierarchy of visual processing levels, from the lowest levels (edge detection) to the highest levels (object recognition), and this idea is reflected in the architecture of CNNs.

The Discovery of Convolutional Neural Networks (CNNs):

LeCun was inspired by the structure of the human brain to develop Convolutional Neural Networks (CNNs). These networks are designed to process visual data efficiently, mimicking the way the human brain processes visual information. His early work with CNNs focused on handwritten document recognition and image classification. These advancements laid the foundation for modern computer vision applications such as facial recognition and object detection.
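
The core operation behind those networks can be shown in a few lines: slide a small filter over an image and compute local weighted sums (the “convolution”, implemented here as the cross-correlation that CNN libraries actually use). The 6×6 image and the hand-built edge filter below are toy values standing in for learned weights.

```python
import numpy as np

# A tiny "image": dark on the left, bright on the right.
image = np.array([
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
], dtype=float)

# A simple vertical-edge filter (a hand-built stand-in for learned weights).
kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
], dtype=float)

def conv2d(img, k):
    """Valid (no padding) 2-D convolution, written out explicitly."""
    kh, kw = k.shape
    out_h, out_w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

feature_map = conv2d(image, kernel)
print(feature_map)   # large-magnitude responses only where the edge sits
```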

Challenges in Developing CNNs:

In the early days of deep learning, computational power was limited. Training deep neural networks required a lot of time and computational resources. LeCun and other researchers had to develop efficient algorithms and use specialized hardware to train their models.

Another major challenge was the lack of large labeled datasets. To train a deep neural network, vast amounts of labeled training data are needed. LeCun and his colleagues had to create their own datasets, which required considerable time and effort.

Overfitting is a common problem in machine learning, where the model fits too closely to the training data and doesn’t generalize well to new data. LeCun and other researchers developed techniques to avoid overfitting, such as regularization and cross-validation.

Early Applications of LeCun’s Research:

The first applications of CNNs developed by LeCun focused on pattern recognition in images. Some of the most notable applications include:

  • Optical Character Recognition (OCR): LeCun and his team developed OCR systems capable of recognizing both handwritten and machine-printed text.
  • Image Classification: CNNs were used for classifying images into different categories, such as faces, objects, and scenes.
  • Image Compression: LeCun also explored the use of CNNs for image compression.

While convolutional neural networks are one of LeCun’s most well-known contributions, his work spans a much broader range of topics within artificial intelligence. Some of his other interests and contributions include:

  • Self-supervised learning: LeCun has been a strong advocate of self-supervised learning, a technique that allows machines to learn useful representations of data without the need for human labels. This technique is crucial for the development of more general and capable artificial intelligence systems.
  • Prediction: LeCun has explored the idea of using generative models to predict the future. This line of research could have applications in areas like robotics and planning.

Key Contributions to Deep Learning and Role at Facebook AI Research:

Yann LeCun is one of the most influential figures in the field of deep learning. His work on convolutional neural networks (CNNs) is fundamental to understanding many modern advances in artificial intelligence, especially in computer vision tasks. His most notable achievements include:

  1. LeNet-5: One of the first successful convolutional neural networks, which revolutionized the field of pattern recognition, particularly the recognition of handwritten digits. LeNet-5 was a precursor of many of the computer vision applications we use today.
  2. Efficient learning algorithms: LeCun has also worked on making neural networks more efficient to train, including the use of backpropagation to train deep networks and the development of gradient-based optimizers (a minimal sketch of both ideas follows this list).
  3. Language models at Facebook AI Research (FAIR): In his role at Facebook AI Research, LeCun has led the creation of large-scale language models, such as those based on transformers, which are essential for tasks like machine translation, natural-language understanding, and text generation.
  4. Computer vision: Beyond his work on CNNs, FAIR has been at the forefront of image segmentation and object detection, key areas for applications such as autonomous vehicles, surveillance systems, and medical diagnostics.
  5. Artificial General Intelligence (AGI): LeCun advocates research toward a more general artificial intelligence, capable of performing a wide range of tasks the way a human can, an area that is still in its early stages of development.
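
As noted in item 2, here is a stripped-down sketch of backpropagation and gradient descent: a one-hidden-layer network learning XOR, with the chain rule written out by hand. The architecture, learning rate, and iteration count are arbitrary choices for the demo, not anything LeCun published.

```python
import numpy as np

# One-hidden-layer network trained on XOR with hand-written backpropagation.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1))
b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)            # hidden activations
    out = sigmoid(h @ W2 + b2)          # predictions in (0, 1)
    loss = np.mean((out - y) ** 2)

    # Backward pass: apply the chain rule layer by layer.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", round(float(loss), 4))
print("predictions:", np.round(out.ravel(), 2))   # should approach [0, 1, 1, 0]
```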

The Role of Facebook AI Research (FAIR)

Under LeCun’s direction, FAIR has emerged as a center of excellence in AI, producing key innovations in several fields:

  • Computer vision: Advances in image segmentation and object detection are vital for automating industrial processes and improving precision medicine.
  • Natural Language Processing (NLP): Through advanced models, FAIR has profoundly shaped the way we interact with technology, from virtual assistants to translation and information-retrieval systems.
  • Reinforcement learning: FAIR has also made advances in reinforcement learning, a technique that allows systems to learn to make decisions autonomously in order to maximize a reward, a field crucial for applications in robotics and autonomous vehicles.

Challenges and LeCun’s Vision for AI

LeCun has identified several fundamental challenges that AI must address in the future, especially in order to reach the level of general intelligence (AGI) and to make ethical and safe use of the technology:

  1. Common-sense, general intelligence: LeCun argues that today’s AI systems are highly specialized. For machines to do everything a human can do, a leap is needed toward more general systems capable of learning more flexibly.
  2. Consciousness and understanding: Although AI has advanced remarkably, LeCun is skeptical about the creation of truly conscious machines that understand the world the way humans do.
  3. Ethics and safety: Like many AI experts, LeCun is aware of the ethical and safety risks the technology entails and has spoken about the importance of ethical standards in the development and responsible use of AI systems.

Ethical Challenges in AI

LeCun has highlighted several ethical challenges that society must confront as AI continues to evolve:

  • Algorithmic biases: AI systems can learn biases present in their training data, which can lead to unfair or discriminatory decisions.
  • Privacy: The collection of large amounts of personal data raises serious concerns about individual privacy.
  • Machine autonomy: As machines gain ever greater autonomy, the question arises of who is responsible when a machine makes harmful decisions.
  • Unemployment: AI-driven automation can lead to job losses, creating economic disparities if it is not managed properly.

LeCun has proposed several responses, including transparency in algorithms and auditing of AI systems, as well as education so that society can better understand the benefits and risks of AI.


Commercial Applications of LeCun’s Technologies

The technologies that LeCun and his team have developed at Facebook AI Research and other laboratories have significant commercial applications:

  • Image recognition: From product classification in online stores to object detection in medical images, LeCun’s CNNs have a direct impact on sectors such as healthcare, e-commerce, and security.
  • Natural language processing: FAIR’s large-scale language models are used in applications ranging from chatbots to machine-translation systems, improving human-machine interaction.
  • Product recommendation: Machine-learning technologies are also applied to personalize products on e-commerce platforms, improving the user experience.
  • Digital advertising: AI optimizes advertising campaigns, helping to show more relevant ads to the right users.

In Summary:

Yann LeCun has been one of the most influential personalities in modern artificial intelligence. His work, particularly on the development of convolutional neural networks (CNNs), has revolutionized key areas of AI such as computer vision and natural language processing. In addition, his leadership at Facebook AI Research has led to significant advances in deep learning, reinforcement learning, and large-scale language models.

Despite his achievements, LeCun remains aware of the ethical, social, and technical challenges that artificial intelligence faces, and he has emphasized the importance of responsible development and of building more general and more ethical systems for the future.
