by Research Team from the Laboratory of the Future | Dec 3, 2024 | News
Press release from the European Commission. Brussels, November 13, 2024.
The European Commission has fined Meta €797.72 million for violating EU antitrust rules by linking its online classified ads service, Facebook Marketplace, to its personal social network Facebook and imposing unfair commercial conditions on other online classified ads service providers.
The infringement:
Meta is a US multinational technology company whose flagship product is its personal social network, Facebook. It also offers an online classified ads service called “Facebook Marketplace,” where users can buy and sell products.
The Commission’s investigation concluded that Meta is dominant in the personal social network market, which covers at least the European Economic Area (EEA), as well as in national online advertising markets for social networks.
In particular, the Commission concluded that Meta abused its dominant position in violation of Article 102 of the Treaty on the Functioning of the European Union (TFEU) by:
- Tying its online classified ads service, Facebook Marketplace, to its personal social network, Facebook. This means that all Facebook users automatically have access to, and are regularly exposed to, Facebook Marketplace whether they want it or not. The Commission concluded that competitors of Facebook Marketplace could be foreclosed, as the tie gives Facebook Marketplace a substantial distribution advantage that competitors cannot match.
- Unilaterally imposing unfair commercial conditions on other online classified ads service providers that advertise on Meta’s platforms, particularly on its popular social networks Facebook and Instagram. These conditions allow Meta to use ads-related data generated by those advertisers for the exclusive benefit of Facebook Marketplace.
The Commission has ordered Meta to effectively end the conduct and refrain from repeating the infringement or adopting practices with an equivalent object or effect in the future.
The fine of €797.72 million has been set based on the Commission’s 2006 guidelines on fines.
To determine the amount of the fine, the Commission took into account the duration and gravity of the infringement, as well as the turnover of Facebook Marketplace, to which the infringement relates and which therefore defines the basic amount of the fine. Additionally, the Commission considered Meta’s total turnover to ensure a sufficient deterrent effect for a company with resources as significant as Meta’s.
Grounds:
In June 2021, the Commission opened a formal procedure for possible anticompetitive conduct by Facebook. In December 2022, the Commission sent Meta a statement of objections, to which Meta responded in June 2023.
Article 102 of the TFEU and Article 54 of the EEA Agreement prohibit the abuse of a dominant position.
A dominant market position is not, in itself, illegal under EU antitrust rules. However, dominant companies have a special responsibility not to abuse their powerful position in the market by restricting competition, either in the market in which they are dominant or in separate markets.
Fines imposed on companies that violate EU antitrust rules are paid into the EU’s general budget. These revenues are not allocated to specific expenses, but the contributions from Member States to the EU budget for the following year are reduced accordingly. Therefore, fines contribute to financing the EU and reducing the burden on taxpayers.
More information on this case will be available under case number AT.40684 in the public case register on the Commission’s competition website, once confidentiality issues have been resolved.
Quotes:
“Today, we fine Meta €797.72 million for abusing its dominant position in the personal social network services and online advertising markets on social media platforms. Meta linked its online classified ads service, Facebook Marketplace, to its personal social network, Facebook, and imposed unfair commercial conditions on other online classified ads service providers. It did this to benefit its own Facebook Marketplace service, which gave it advantages that other online classified ads service providers could not match. This is illegal under EU antitrust rules. Meta must now end this conduct.”
Margrethe Vestager, Executive Vice-President in charge of Competition Policy
by Research Team from the Laboratory of the Future | Nov 9, 2024 | Artificial intelligence
Judea Pearl
“Robots will talk to each other, they will have their own will, desires… I don’t know what surprises you about this.”
He has revolutionized artificial intelligence and is now ready to revolutionize our lives. This computational engineer and philosopher has laid the mathematical foundations for robots to think and feel like humans, not just accumulate data. For his discoveries, he has just received the BBVA Frontiers of Knowledge Award.
Author: Ana Tagarro. ABC Madrid.
He has an impressive resume. The Turing Award – the Nobel of computing – a PhD in Engineering, a Master’s in Physics, awards in Psychology, Statistics, and Philosophy, and now, the BBVA Foundation Frontiers of Knowledge Award in Communication Technologies. And, if that wasn’t enough, he’s a gifted pianist. Judea Pearl, however, prefers to define himself as a poet. After all, he makes metaphors with equations. In the 1980s, he developed a mathematical language, Bayesian networks, which are essential in every computer today. But now, at 87 years old, he declares himself an ‘apostate’ of artificial intelligence. Why? Well, precisely because of that why. It’s not wordplay. Pearl states that unless we teach machines to understand cause-and-effect relationships, in all their complex variations, they won’t think like us. And he knows how to achieve it. He explains it to us from his home in Los Angeles, where he is still a professor at the University of California. He remains as lucid as the young Israeli who, trained in a small biblical town, arrived in sunny California 60 years ago.
XL: Your goal is to build machines with human-level intelligence that think like us.
Judea Pearl: Yes, because until now we haven’t made machines that ‘think’. They only simulate some aspects of human thinking.
“Between humans and machines, only the ‘hardware’ is different; the ‘software’ is the same. Perhaps there could be one difference: the fear of death. But I don’t know…”
XL: And to make machines think, you claim they must think in terms of causes and effects, asking themselves ‘why’.
J.P.: Yes, but there are levels. It’s what we call ‘the ladder of causality’. Current machines only create associations between what was observed before and what will be observed in the future. This is what allows eagles or snakes to hunt their prey. They know where the mouse will be in five seconds.
XL: But that’s not enough…
J.P.: No. There are two rungs above that on the ladder that machines can’t yet reach. One is predicting the effect of actions that have never been carried out before under the same conditions.
XL: But there’s more…
J.P.: The next step is retrospection. For example: I took an aspirin, and my headache went away. Did the aspirin relieve the pain, or was it the good news my wife gave me when I took it? Thinking along these lines: would an event have taken place if another event in the past had not occurred? For now, this is something only humans do.
The ladder of artificial intelligence. The ultimate leap of machines:
XL: Because until now, this way of thinking couldn’t be translated into mathematical formulas, but now it can, thanks to you…
J.P.: Yes, now we have mathematical tools that allow us to reason in all three levels. It’s just a matter of applying them to artificial intelligence.
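To make the three rungs concrete, here is a minimal sketch in Python of the aspirin example as a toy structural causal model; the variables, the probabilities, and the simple “relief = aspirin OR good news” mechanism are illustrative assumptions for this article, not Pearl’s formal machinery.

```python
import random

random.seed(0)

# Toy structural causal model for the aspirin example:
#   news    -- exogenous good news, coin flip
#   aspirin -- exogenous decision to take the pill, coin flip
#   relief  = aspirin OR news   (either one removes the headache)

def sample():
    news = random.random() < 0.5
    aspirin = random.random() < 0.5
    return news, aspirin, aspirin or news

data = [sample() for _ in range(100_000)]

# Rung 1 -- association: P(relief | aspirin observed), by filtering the data.
taken = [r for n, a, r in data if a]
print("P(relief | aspirin seen)  ~", sum(taken) / len(taken))

# Rung 2 -- intervention: P(relief | do(aspirin = True)), by *setting* the
# variable in the model instead of filtering on it. Here the two rungs agree
# because nothing confounds the decision to take the aspirin.
do_runs = [(True or (random.random() < 0.5)) for _ in range(100_000)]
print("P(relief | do(aspirin))   ~", sum(do_runs) / len(do_runs))

# Rung 3 -- counterfactual: I took the aspirin and the headache went away;
# would it have gone away had I NOT taken it? Keep only the worlds consistent
# with what actually happened, then replay them with aspirin switched off.
consistent = [(n, a, r) for n, a, r in data if a and r]
replayed = [n for n, a, r in consistent]   # with aspirin off, relief = news
print("P(relief had I not taken it | took it, got relief) ~",
      sum(replayed) / len(replayed))
```

In this toy world, about half of the headaches would have gone away anyway thanks to the good news, which is exactly the kind of question the first two rungs cannot even pose.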
XL: Let me clarify what you said; does that mean you translate imagination, responsibility, and even guilt into equations?
J.P.: Yes, correct.
XL: Correct and mind-blowing, right? Robots will be able to imagine things that don’t exist. And you yourself say that this capacity has been key to human dominance over other species. Will machines now do it?
J.P.: Correct, totally. Humans created this ‘market of promises,’ convincing someone to do something in exchange for a promise of the future. And machines will be able to do that.
“We create robots for the same reason we have children. To replicate ourselves. And we raise them in the hope that they will have our values. And most of the time, it works out well.”
XL: You confidently claim, for example, that robots will play soccer and say things like, “You should have passed me the ball earlier.”
J.P.: Yes, of course, and soccer will be much better. Robots will communicate like humans. They will have their own will, desires… I’m surprised that this surprises you [laughs].
XL: What surprises me is how naturally you speak about these ‘human’ machines…
J.P.: Look, I’ve been in artificial intelligence for over 50 years. I grew up with the clear idea that anything we can do, machines will be able to do. I don’t see any obstacle, none.
XL: But then, what sets us apart from machines?
J.P.: That we are made of organic matter and machines are made of silicon. The hardware is different, but the software is the same.
“Artificial intelligence has the potential to be terrifying and the potential to be extremely convenient. For now, it’s just ‘new.’ It’s too early to legislate.”
XL: Not much difference…
J.P.: There might be one difference: the fear of death. But I’m not sure that makes a big difference, maybe.
XL: And falling in love?
J.P.: Machines can fall in love. Marvin Minsky wrote an entire book on the emotions of machines, The Emotion Machine; it’s from years ago…
XL: That’s a bit scary…
J.P.: It’s not scary; it’s just new. It has the potential to be terrifying and the potential to be extremely convenient. For now, it’s just ‘new.’
XL: Will machines be able to distinguish between right and wrong?
J.P.: Yes, with the same reliability as humans; maybe even more. The analogy I like is that of our children. We believe they will think like us; we raise them in the hope that we will instill our values in them. And still, there is a risk that my child could turn out to be another Putin. But we all go through the process of raising our children hoping they will acquire our values. And it usually works out well…
XL: But is anyone working on the ethical and moral foundations of this artificial intelligence?
J.P.: A lot of people, yes. But I think it’s too early to legislate.
XL: I would say it’s late…
J.P.: We have a new kind of machine. We need to observe it because we still don’t know how it will evolve. And we can’t legislate out of fear, from unfounded fears.
XL: But you yourself mention that the creators of a highly successful artificial intelligence, AlphaGo from DeepMind, don’t know why it’s so effective, that they don’t ‘control’ their creation…
J.P.: Correct. But look, we don’t know how the human mind works either. We don’t know how our children will develop their minds, and still, we trust them. And do you know why? Because they work like us. And we think: they probably think like I do. And that’s what will happen with machines.
XL: But then children turn out however they want… Although you maintain that free will is “an illusion.” And here we were thinking we made choices! What a disappointment…
J.P.: For you, it’s a disappointment, for me it’s a great comfort. Since Aristotle and Maimonides, philosophers have been trying to reconcile the idea of God with free will. A God who predicts the future, who knows what’s good and what’s bad, and yet punishes us for doing things He’s programmed us to do. This is a terrible ethical problem that we couldn’t solve.
XL: And you’re going to solve it with artificial intelligence?
J.P.: Of course, because the first premise is that there is no free will. We have the illusion that we are in control when we make decisions, but that’s not the case. The decision has already been made in the brain. It’s our neurons that tell us how to act, the ones that, through excitement or nervousness, make me move my hand or scratch my nose. It’s deterministic, and there’s no divine force behind it.
“We’ll have implants, and they will interact with those of other people. It’s scary, huh? [laughs]. But we all already have implants: they’re called ‘language,’ ‘culture’… we are born with them.”
XL: What can we do to teach or learn mathematics better?
J.P.: Bill Gates asked me the same thing. And I looked at my own education. I was lucky to have excellent teachers, German Jews who came to Tel Aviv fleeing the Nazi regime. They taught science and mathematics chronologically, not logically. When they told us about Archimedes, how he jumped out of the bathtub shouting “Eureka, Eureka!”, we got involved. The basis of our intelligence is stories, narratives, because they connect people. Stories make history. It’s easier to implant abstract ideas like mathematics through stories, through narratives.
XL: And what about philosophy, which is now being sidelined in education?
J.P.: It’s terrible. Philosophy is very important. It connects us with at least 80 generations of thinkers. It creates a common language, builds civilization.
XL: But it’s not useful for finding a job… or so they say. And priority is given to engineering, which builds the very robots that are going to take our jobs…
J.P.: Yes, that’s already happening. And it will happen more. There are two aspects to this: one is how we’re going to feel useful when we don’t have a job. The other is how we’ll live, how we’ll earn a salary. The second is an economic and management issue. I don’t have a solution for that. But there is one. It will come.
XL: And for the first one?
J.P.: We can solve it. I’m 87 years old, I’m useless, and I find joy every hour of the day.
XL: [Laughs] You are definitely not useless, and you know it.
J.P.: Look, almost everything is illusory. I live with the illusion of my environment’s response, from my children, my students. If I give a lecture, I feel happy because I have the illusion that it benefits someone. It’s possible to create illusions. One creates them for oneself.
“The basis of our intelligence is stories, narratives, because they connect people. Stories make history. It’s easier to implant abstract ideas like mathematics through narratives.”
XL: We were talking earlier about good and evil. You have suffered evil in an unimaginable way, when your son was murdered (see box); now there is a war… Can machines change that, make us better?
J.P.: I don’t have the answer. But maybe, when we implement empathy or remorse in machines, we’ll understand how they form in us and be able to become somewhat better.
XL: And what do you think about incorporating technology into our bodies? Becoming transhuman…
J.P.: I see no obstacle to that. We’ll have implants, and they’ll interact with implants from other people or other agents.
XL: Would you like to have an implant in your brain?
J.P.: Scary, huh? [laughs]. I already have an implant. We all do: they’re called ‘language,’ ‘culture’… we are born with them. But since we’re used to them, they don’t surprise us.
XL: But why do you insist on making machines smarter than us?
J.P.: Because we’re trying to replicate and amplify ourselves.
XL: For what?
J.P.: For the same reason we have children.
XL: I get the analogy, but we used to create machines to help us; now they’re replacing us.
J.P.: No, no. We create machines to help us. They will replace us, yes. But we created them to help us [laughs]. Though they’ll surpass us.
XL: Is there a mathematical formula for justice?
J.P.: There has to be. That way, there would be no ambiguity, and no dictator could tell us what is just. To fight a Putin, we’d need more mathematics.
“I don’t make predictions, but the future is going to be totally different, a revolution. I’m optimistic, even though I don’t know where it will take us.”
XL: You have a lot of old books.
J.P.: I collect them. I have a first edition of Galileo [he picks it up].
XL: You travel through time. You go from those books to artificial intelligence. I can’t help but ask, even though you told me not to, how do you see the world in 10 or 20 years…
J.P.: [Laughs]. I don’t make predictions. But it’s going to be totally different, a revolution. I don’t know where it will take us, but I’m optimistic. Though it’s sad that my grandchildren won’t enjoy, for example, reading my old books. The cultural gap between generations will grow. And that worries me. Because they’re going to lose all the wisdom we transmitted from parents to children.
XL: And you say that, while making thinking robots!
J.P.: Yes, but I make thinking machines to understand how we think.
XL: What advice would you give to the ‘still salvable’ young people?
J.P.: Read history.
XL: Read? You’re too optimistic…
J.P.: Alright, let them watch documentaries. About civilizations, evolution, how we became who we are. Be curious! That’s my advice: try to understand things for yourselves.
Artificial Intelligence – Interview with Judea Pearl
by Research Team from the Laboratory of the Future | Oct 31, 2024 | Artificial intelligence
The machine learning that laid the foundations of Artificial Intelligence: the discoveries of the winners of the Nobel Prize in Physics.
The American John Hopfield and the Briton Geoffrey Hinton were recognized for their advances in artificial neural networks, a computational structure inspired by the functioning of the brain.
The Royal Swedish Academy of Sciences awarded the 2024 Nobel Prize in Physics to American John Hopfield and British Geoffrey Hinton for their fundamental contributions to the development of machine learning, considered a key tool for Artificial Intelligence (AI) as we know it today.
Hopfield was born in 1933 in Chicago and conducts his research at Princeton University, USA. Hinton was born in 1947 in London and is a researcher at the University of Toronto, Canada.
In presenting the laureates, the Nobel committee highlighted that “although computers cannot think, machines can now imitate functions such as memory and learning. This year’s Nobel laureates in Physics have contributed to making this possible.”
Using principles of physics, both scientists achieved key breakthroughs that laid the foundations for artificial neural networks, a computational structure inspired by the functioning of the brain. This discovery not only changed the way machines process and store information, but it was also crucial for the development of modern Artificial Intelligence (AI), particularly in deep learning.
Understanding the brain to create artificial neural networks:
The work of Hopfield, from Princeton University, and Hinton, from the University of Toronto, is deeply related to concepts from physics and biology. Although today we associate machine learning with computers and algorithms, the first steps toward the creation of artificial neural networks stemmed from the desire to understand how the human brain works and processes information. Hopfield, a theoretical physicist, played a decisive role in applying physical concepts to neuroscience to explain how the brain can store and retrieve information.
In 1982, he developed the Hopfield network, an artificial neural network model that can store patterns of information and later retrieve them even when they are incomplete or altered. This concept, known as associative memory, mimics the human ability to recall, for example, a word that is on the tip of the tongue, processing other nearby meanings until the correct one is found.
Hopfield applied physical knowledge, such as the principles governing atomic spin systems, to create his network. In physics, spin is a property of subatomic particles that generates a magnetic field. Inspired by this behavior, Hopfield designed a system in which neurons, or nodes, were interconnected with varying intensities, similar to how the atoms in a magnetic material influence the directions of their neighboring spins.
This approach allowed the network to efficiently associate and reconstruct patterns, a revolutionary idea that marked the beginning of a new era in neural computation.
Image caption: Inspired by neuroscience, Hopfield designed a model that reconstructs stored patterns even from incomplete input, applying physical principles similar to the behavior of magnetic materials.
The Hopfield Network and Associative Memory:
The Hopfield network represents a significant advance because it is based on a system capable of storing multiple patterns simultaneously. When an incomplete pattern is presented to it, the network can find the closest of those it has already memorized and reconstruct it. This process resembles rolling a ball across a landscape of peaks and valleys: if the ball is dropped near a valley (a stored pattern), it rolls to the bottom, where the closest pattern lies.
In technical terms, the network is programmed with a black-and-white image by assigning binary values to each node (0 for black, 1 for white). Then, an energy formula is used to adjust the connections between the nodes, allowing the network to reduce the system’s total energy and eventually reach a stable state where the original pattern has been recreated. This approach was not only novel but also proved to be scalable: the network could store and differentiate multiple images, opening the door to a form of distributed information storage that would later inspire advancements in artificial intelligence.
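To make the storage-and-recall mechanism concrete, here is a minimal Hopfield network sketch in Python with NumPy. It uses the conventional -1/+1 unit coding (the 0/1 description above is the same idea up to a change of variables), and the pattern count and sizes are illustrative.

```python
import numpy as np

# Minimal Hopfield network: store patterns with a Hebbian weight rule,
# then recover a stored pattern from a corrupted cue by descending the
# network's energy.

def train(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)            # Hebbian rule: co-active units bond
    np.fill_diagonal(W, 0)             # no self-connections
    return W / len(patterns)

def recall(W, state, sweeps=10):
    state = state.copy()
    for _ in range(sweeps):            # each flip lowers (or keeps) the energy
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))   # three random 64-unit "images"
W = train(patterns)

cue = patterns[0].copy()
flipped = rng.choice(64, size=10, replace=False)
cue[flipped] *= -1                     # corrupt 10 of the 64 units

recovered = recall(W, cue)
print("units still wrong:", int(np.sum(recovered != patterns[0])))
```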
Hinton and the Boltzmann Machine:
While Hopfield developed his network, Geoffrey Hinton explored how machines could learn to process patterns similarly to humans, finding their own categories without the need for explicit instructions.
Hinton pioneered the Boltzmann machine, a type of neural network that uses principles of statistical physics to discover structures in large amounts of data.
Statistical physics deals with systems made up of many similar elements, like the molecules of a gas, whose individual states are unpredictable but can be collectively analyzed to determine properties like pressure and temperature. Hinton leveraged these concepts to design a machine that could analyze how probable a given configuration of the network’s nodes is, based on the overall energy of the network. Inspired by Ludwig Boltzmann’s equation, Hinton used this formula to calculate the probability of different configurations within the network.
The Boltzmann machine has two types of nodes: visible and hidden. The former receive the initial information, while the hidden nodes generate patterns from that information, adjusting the network’s connections so that the trained examples are most likely to occur. In this way, the machine learns from examples, not instructions, and can recognize patterns even when the information is new but resembles previously seen examples.
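A minimal sketch of this energy-probability relationship, in Python with NumPy: the weights below are random and illustrative, whereas a real Boltzmann machine adjusts them during training so that configurations resembling the examples become the likely ones.

```python
import numpy as np

# Minimal sketch of the Boltzmann-machine idea: every joint configuration of
# visible and hidden binary units gets an energy E, and the network assigns
# it probability proportional to exp(-E) -- Boltzmann's formula.

rng = np.random.default_rng(0)
n_visible, n_hidden = 3, 2
W = rng.normal(0, 0.5, size=(n_visible, n_hidden))  # visible-hidden couplings
a = np.zeros(n_visible)                             # visible biases
b = np.zeros(n_hidden)                              # hidden biases

def energy(v, h):
    return -(a @ v + b @ h + v @ W @ h)

def all_states(n):
    # Every binary vector of length n (feasible only at toy sizes; training
    # replaces this exhaustive enumeration with sampling).
    return [np.array([(i >> k) & 1 for k in range(n)]) for i in range(2 ** n)]

configs = [(v, h) for v in all_states(n_visible) for h in all_states(n_hidden)]
weights = np.array([np.exp(-energy(v, h)) for v, h in configs])
probs = weights / weights.sum()        # the partition function normalizes

v, h = configs[int(np.argmax(probs))]
print("most probable configuration:", v, h, "p =", round(float(probs.max()), 3))
```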
The Foundations of Deep Learning in Artificial Intelligence
The work of Hopfield and Hinton not only revitalized interest in neural networks, but also paved the way for the development of deep learning, the branch of AI that drives much of today’s technological innovation, from virtual assistants to autonomous vehicles.
Deep neural networks, which are models with many layers of neurons, owe their existence to these early breakthroughs in artificial neural networks.
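As a minimal illustration of what “many layers” means, here is a sketch of a forward pass through a small stack of layers in Python with NumPy; the layer sizes and random weights are illustrative stand-ins for what training would learn.

```python
import numpy as np

# Minimal "deep" forward pass: each layer applies weights, a bias, and a
# nonlinearity to the previous layer's output; depth just means many layers.

rng = np.random.default_rng(0)
sizes = [16, 32, 32, 4]          # input -> two hidden layers -> output
layers = [(rng.normal(0, 0.1, (m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

def forward(x):
    for W, b in layers:
        x = np.maximum(0.0, W @ x + b)   # ReLU nonlinearity at every layer
    return x

print(forward(rng.normal(size=16)).shape)  # -> (4,)
```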
Today, neural networks are essential tools for analyzing vast amounts of data, identifying complex patterns in images and sounds, and improving decision-making in fields ranging from medicine to astrophysics.
For example, in particle physics, artificial neural networks were key in the discovery of the Higgs boson, whose prediction was awarded the Nobel Prize in Physics in 2013. Similarly, machine learning has helped improve the detection of gravitational waves, another recent scientific milestone.
Thanks to the discoveries of Hopfield and Hinton, AI continues to evolve at a rapid pace. In the field of molecular biology, for instance, neural networks are used to predict protein structures, which has direct implications in drug development. Additionally, in renewable energy, networks are being used to design materials with better properties for more efficient solar cells.
by Research Team from the Laboratory of the Future | Oct 28, 2024 | Artificial intelligence
John McCarthy:
The Father of Artificial Intelligence
John McCarthy (1927 – October 2011) was an American mathematician and computer scientist renowned for coining the term “artificial intelligence” and for his pioneering contributions to the development of this field. His vision, technical work, and role in founding the discipline make him one of the fathers of artificial intelligence, and his legacy continues to inspire researchers around the world in their pursuit of creating intelligent machines.
The Beginnings and Vision:
The Dartmouth Conference: In 1956, McCarthy organized the Dartmouth Conference, a historic event where the leading researchers of the time gathered to discuss the possibility of creating intelligent machines. This conference marked the formal birth of artificial intelligence as a field of study.
What was the Dartmouth Conference?
The Dartmouth Conference was an academic meeting that took place in the summer of 1956 at Dartmouth College in Hanover, New Hampshire. Organized by John McCarthy together with Marvin Minsky, Nathaniel Rochester, and Claude Shannon, this conference marked the formal birth of artificial intelligence as a field of study.
The main goal of the conference was to explore the possibility of creating machines capable of performing tasks that had previously been considered exclusive to humans, such as reasoning, learning, and problem-solving. The organizers hypothesized that it was possible to simulate any aspect of learning or any other characteristic of intelligence in a machine.
Although the group of participants was relatively small, it was composed of some of the brightest computer scientists of the time. Among them were:
- John McCarthy: the main organizer and the person who coined the term “artificial intelligence.”
- Marvin Minsky: founder of the MIT Artificial Intelligence Laboratory.
- Claude Shannon: considered the father of information theory.
- Allen Newell and Herbert Simon: pioneers in the field of symbolic artificial intelligence and creators of the Logic Theorist program.
During the six weeks of the conference, the participants discussed a wide range of topics related to artificial intelligence, including:
- Automation of creative processes: how to make machines capable of writing music, composing poetry, or creating works of art.
- Simulation of mental processes: how to model human thinking in a machine.
- Development of programming languages: the need to create programming languages suitable for research in artificial intelligence.
- Machine learning: how to make machines learn from experience.
- Computational neuroscience: the relationship between artificial intelligence and the functioning of the human brain.
This conference had a lasting impact on the development of artificial intelligence. Some of the most important outcomes were:
- The birth of a field: the conference solidified artificial intelligence as an academic and scientific discipline.
- The creation of research laboratories: following the conference, numerous research laboratories in artificial intelligence were founded around the world.
- The development of new programming languages: Lisp, one of the most important programming languages for AI, was developed in the years following the conference.
- Funding for research: the conference generated significant interest in artificial intelligence and attracted substantial investment for research in the field.
In summary, the Dartmouth Conference was a seminal event that laid the foundation for the development of artificial intelligence as we know it today. Thanks to this conference, a group of visionary scientists took the first steps toward creating intelligent machines.
The Term “Artificial Intelligence”: It was at this conference that McCarthy proposed the term “artificial intelligence” to describe the science and engineering of creating intelligent machines.
Key Contributions of McCarthy:
McCarthy developed the Lisp programming language, one of the first languages specifically designed for research in artificial intelligence. Lisp, whose name stands for LISt Processor, was notable for its flexibility and its ability to manipulate symbols, which made it a fundamental tool for developing expert systems and for research in machine learning. Created in the late 1950s, it is a high-level language with a long history and a significant influence on the development of computing, and its conceptual simplicity and flexibility make it a powerful tool for programming and, especially, for artificial intelligence research.
The key characteristics of Lisp can be summarized as follows (a short code sketch after the list illustrates the first two points):
- Homoiconic syntax: One of Lisp’s most distinctive features is its homoiconic syntax. This means that Lisp code is itself a data structure, allowing great flexibility in manipulating the code.
- List processing: As its name suggests, Lisp is designed to work with lists. Lists are the fundamental data structure in Lisp and are used to represent both data and code.
- First-class functions: Functions in Lisp are treated as any other data. They can be assigned to variables, passed as arguments to other functions, and returned as values.
- Macros: Lisp offers a powerful macro system that allows programmers to extend the language and create new syntactic constructs.
- Multiparadigm: Lisp is a multiparadigm language, meaning it supports different programming styles, such as functional programming, imperative programming, and object-oriented programming.
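To make the first two points concrete, here is a tiny s-expression evaluator, sketched in Python rather than Lisp so it runs without a Lisp system; the small operator table and the helper names are illustrative.

```python
import operator

# Programs are plain nested lists ("code is data"), so a program can build,
# rewrite, and run other programs -- the idea behind Lisp's homoiconic syntax.

ENV = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr, env=ENV):
    if isinstance(expr, (int, float)):   # atoms evaluate to themselves
        return expr
    if isinstance(expr, str):            # symbols are looked up in the env
        return env[expr]
    op, *args = expr                     # a list means: apply op to the args
    return evaluate(op, env)(*(evaluate(a, env) for a in args))

# (* (+ 1 2) 4), written as a Python list -- the program IS a data structure:
program = ["*", ["+", 1, 2], 4]
print(evaluate(program))                 # -> 12

# Because code is data, we can transform a program before running it,
# which is the essence of what Lisp macros make convenient:
doubled = ["*", 2, program]
print(evaluate(doubled))                 # -> 24
```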
Why is Lisp important?
- Influence on other languages: Lisp has influenced the design of many other programming languages, such as Python, Scheme, Clojure, and JavaScript.
- Used in artificial intelligence: Lisp was one of the first languages used for research in artificial intelligence and remains popular in this field.
- Metaprogramming: The homoiconic syntax of Lisp facilitates metaprogramming, i.e., the ability to write programs that manipulate other programs.
- Flexibility: Lisp is a very flexible language that allows programmers to express ideas concisely and elegantly.
Today, Lisp is used in artificial intelligence research, machine learning, and natural language processing. It is also used to develop general-purpose software, some web applications, and embedded systems. Additionally, it is used as a teaching language in universities and programming schools due to its simplicity and expressive power.
In summary, Lisp is a programming language with a long history and significant influence on the development of computing. Its homoiconic syntax, focus on list processing, and flexibility make it a powerful tool for programming and research. Although it may seem like an old language, Lisp remains relevant today and continues to inspire new programmers.
McCarthy introduced fundamental concepts for the development of artificial intelligence, such as heuristics, search, and expert systems. These ideas laid the foundation for much of the subsequent research in the field.
Another of his contributions was institutional: along with Marvin Minsky, McCarthy founded the MIT Artificial Intelligence Laboratory, one of the most important research centers in the field.
The MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) is one of the most prestigious and important research labs in the world in the fields of computer science and artificial intelligence. CSAIL is the result of the merger of two pre-existing labs at MIT:
- Laboratory for Computer Science: Founded in 1963, it focused on fundamental research in computer science.
- Artificial Intelligence Laboratory: Founded in 1959, it was dedicated to pioneering research in artificial intelligence.
In 2003, both labs merged to form CSAIL, creating an even larger and more powerful research center. Thus, CSAIL is the product of the work of numerous researchers, scientists, and visionaries over several decades.
Its current major research areas include:
- Artificial Intelligence: development of machine learning algorithms, computer vision, natural language processing, and expert systems.
- Robotics: design and construction of autonomous robots and intelligent control systems.
- Computational Biology: application of computational techniques to analyze biological data and develop new therapies.
- Cybersecurity: development of secure systems and protocols to protect information and critical infrastructures.
- Human-Computer Interaction: design of intuitive and natural interfaces for interacting with computers.
The work done at CSAIL has had a significant impact on society and industry. Some of the most notable achievements include the development of key technologies such as the Internet, the World Wide Web, and natural language processing; innovation in robotics; and advances in artificial intelligence, including pioneering deep learning algorithms and contributions to the creation of virtual assistants and autonomous vehicles.
Beyond the technical:
McCarthy not only focused on the technical aspects of artificial intelligence but also reflected on the philosophical and social implications of this technology. He was an advocate for artificial intelligence as a tool to solve real-world problems and improve people’s quality of life.
John McCarthy played a key role in the development of expert systems, one of the first practical applications of artificial intelligence.
A major push from McCarthy: what are expert systems?
Expert systems are computer programs designed to emulate the reasoning of a human expert in a specific domain. These systems use a knowledge base and inference rules to solve problems and make decisions. For example, a medical expert system might diagnose diseases based on a patient’s symptoms and medical history.
Although McCarthy did not develop the first expert system, his ideas and contributions were fundamental to the development of this technology. His focus on knowledge representation and logical reasoning provided a solid foundation for the creation of these systems.
McCarthy emphasized the importance of representing knowledge in a formal and structured way. This idea was crucial for the creation of knowledge bases used in expert systems. McCarthy and his colleagues developed rule-based reasoning techniques, which allow expert systems to draw conclusions from a set of facts and rules. The Lisp language, which was widely used to develop expert systems due to its ability to manipulate symbols, played a key role.
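As a minimal sketch of this architecture, a knowledge base of facts plus if-then rules driven by a forward-chaining loop, here is a toy rule engine in Python; the medical facts and rules are invented for illustration and do not come from any real system.

```python
# A toy forward-chaining expert system: facts plus if-then rules.

facts = {"fever", "cough"}

rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "urgent_referral"),
    ({"flu_suspected"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:              # keep firing rules until nothing new is added
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain(facts, rules)))
# -> ['cough', 'fever', 'flu_suspected', 'recommend_rest']
```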
McCarthy’s ideas on knowledge representation and logical reasoning remain relevant in the development of intelligent systems. Although expert systems have evolved significantly since the early days, the fundamental principles established by McCarthy are still valid.
Legacy and Recognition:
John McCarthy passed away in 2011, leaving an indelible legacy in the field of artificial intelligence. His ideas and contributions continue to inspire researchers around the world. Throughout his career, he received numerous recognitions, including the Turing Award, considered the Nobel Prize of computing.
In summary, John McCarthy was a visionary who transformed the way we think about intelligence and machines. His passion for logic, his ability to create powerful tools, and his forward-looking vision laid the groundwork for the development of modern artificial intelligence.
Although McCarthy did not exclusively focus on robotics, his ideas and contributions were foundational for the development of this discipline. His approach to knowledge representation, planning, and reasoning provided a solid foundation for the creation of intelligent robots.
Task planning: Planning techniques developed in the context of artificial intelligence, influenced by McCarthy’s work, were applied to robotics to allow robots to plan and execute complex action sequences. For example, an industrial robot can plan the best path to move a part from one point to another while avoiding obstacles.
Computer vision: The development of computer vision systems, necessary for robots to perceive their environment, benefited from research on knowledge representation and image processing. McCarthy and his colleagues contributed to laying the groundwork for robots to “see” and understand the world around them.
Robot learning: McCarthy’s ideas about machine learning inspired the development of algorithms that allow robots to learn from experience and improve their performance. For example, a robot can learn to walk more efficiently through trial and error.
McCarthy provided the conceptual and technical tools necessary for robots to perform increasingly complex tasks and adapt to changing environments.
McCarthy’s Vision for the Future of Artificial Intelligence:
McCarthy was a visionary who believed in the potential of artificial intelligence to transform the world. His vision was ambitious and spanned from practical applications to philosophical issues.
- Artificial General Intelligence (AGI): McCarthy was convinced that it was possible to create machines with intelligence comparable to human intelligence, what we now know as AGI. He believed that AGI could solve some of humanity’s most important problems, such as poverty, disease, and climate change.
- Superintelligence: Although he did not use the term “superintelligence,” McCarthy foresaw the possibility that machines could surpass human intelligence in many areas. He expressed both enthusiasm and concern about this possibility, emphasizing the importance of developing AI systems that are safe and beneficial for humanity.
- Practical applications: He was also interested in the practical applications of artificial intelligence. He envisioned a future in which intelligent systems would assist people in a wide range of tasks, from healthcare to education.
In summary, McCarthy’s vision for the future of artificial intelligence was optimistic and ambitious. He believed that AI would have a profound impact on society and that it was crucial to develop this technology responsibly and ethically.
by Research Team from the Laboratory of the Future | Oct 25, 2024 | Artificial intelligence
Yoshua Bengio
One of the leaders in the field of deep learning:
Yoshua Bengio is a fundamental figure in the field of deep learning, and his contributions have been crucial to the development of this technology.
Bengio showed a great passion for computer science and mathematics from a young age. His interest focused on understanding how the human mind works and whether it was possible to replicate some of its capabilities in machines. He studied at McGill University, where he obtained his Ph.D. in computer science. During his studies, he became deeply interested in artificial neural networks, a technology that at the time was considered to hold little promise. Bengio was inspired by the work of pioneering researchers such as Geoffrey Hinton and David Rumelhart, who laid the foundations of deep learning and gave him a clear vision of the technology’s potential. Today Bengio is one of the main drivers of the field: his research on Recurrent Neural Networks (RNNs) and representational learning has had a profound impact on the development of AI, and he is a visionary who believes AI can transform the world while remaining aware of the challenges and risks this technology poses.
Key Contributions and Impact on Deep Learning:
Bengio made significant contributions, particularly in the field of deep learning:
- Recurrent Neural Networks (RNNs): Bengio is globally recognized for his contributions to the development of RNNs. These networks are ideal for processing sequences of data, such as text or time series, and have revolutionized the field of natural language processing.
- Representational Learning: He has made important advances in representational learning, which seeks to find internal representations of data that allow machines to learn more complex tasks.
- Founding of MILA: Bengio founded the Montreal Institute for Learning Algorithms (MILA), which has become one of the world’s most important AI research centers. MILA (Institut Québécois d’Intelligence Artificielle) is a research institute led by Yoshua Bengio that is highly influential in the world of deep learning. MILA is dedicated to basic research in artificial intelligence, aiming to understand the fundamental principles behind learning and intelligence. MILA’s research has led to numerous practical applications in fields such as computer vision, natural language processing, and medicine. The main features of the institute include:
- Emphasis on local talent: MILA has been key in developing an AI ecosystem in Montreal, attracting talent from around the world and training a new generation of researchers.
- Close collaboration with industry: MILA works closely with companies like Google DeepMind and Element AI, enabling the translation of research advances into commercial products and services.
- Commitment to society: MILA is concerned with the social implications of AI and works to ensure that this technology is developed ethically and responsibly.
Some of MILA’s Most Important Contributions:
- Development of deep learning algorithms: MILA has developed innovative algorithms to train larger and deeper neural networks, significantly improving performance in tasks such as image recognition and natural language processing.
- Applications in healthcare: MILA researchers are working on AI tools to diagnose diseases, analyze medical images, and personalize treatments.
- AI for social good: MILA also investigates how AI can be used to address major social challenges, such as climate change and inequality.
RNNs and Natural Language Processing:
RNNs, thanks to their ability to process sequences, have been essential in the development of advanced language models (a minimal sketch of the recurrent step follows the list below). These networks have enabled:
- Machine translation: RNN-based models have significantly improved the quality of machine translation.
- Text generation: RNNs can generate coherent and creative text, such as poems or programming code.
- Sentiment analysis: They can analyze the sentiment of a text, identifying whether it is positive, negative, or neutral.
- Chatbots and virtual assistants: RNNs form the foundation of many chatbots and virtual assistants, enabling them to maintain coherent and meaningful conversations.
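Here is the minimal sketch promised above: the recurrent step that underlies all of these applications, in Python with NumPy. The sizes and random weights are illustrative (a trained RNN learns them), and a real language model adds input embeddings and an output layer on top.

```python
import numpy as np

# Minimal recurrent cell: the same weights are applied at every time step,
# and the hidden state h carries information along the sequence.

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8
W_xh = rng.normal(0, 0.1, (n_hidden, n_in))       # input -> hidden
W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))   # hidden -> hidden (recurrence)
b = np.zeros(n_hidden)

def rnn(sequence):
    h = np.zeros(n_hidden)
    for x in sequence:            # one update per element of the sequence
        h = np.tanh(W_xh @ x + W_hh @ h + b)
    return h                      # the final state summarizes the sequence

sequence = [rng.normal(size=n_in) for _ in range(5)]  # e.g., 5 word vectors
print(rnn(sequence).shape)        # -> (8,)
```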
Bengio and His Vision for the Future of Artificial Intelligence:
Bengio is optimistic about the future of AI but is also aware of the challenges and risks it poses. His main concerns include:
- Algorithmic biases: AI can perpetuate and amplify biases present in training data.
- Privacy: The collection and use of large amounts of personal data raises significant privacy concerns.
- Unemployment: The automation of tasks could lead to job loss and increased inequality.
Despite these challenges, Bengio believes AI can be a force for good, helping us solve some of the world’s biggest problems, such as diseases and climate change.
Current Work:
Bengio continues to work on developing new deep learning techniques and applying these techniques to real-world problems. He is currently focused on:
- Ethics in AI: Bengio is an advocate for ethics in AI; he is actively engaged in discussions about its ethical implications and societal impact, and works to ensure that the technology is developed responsibly and for the benefit of humanity.
- Self-supervised learning: Bengio believes that self-supervised learning is key to developing more general and capable AI systems.
- Artificial General Intelligence (AGI): He is interested in the development of AGI, which refers to AI with cognitive abilities similar to those of humans.