Series: The Pioneers of Artificial Intelligence, Part 11

Artificial intelligence

October 31, 2024

The American John Hopfield and the Briton Geoffrey Hinton were recognized for their advances in artificial neural networks, a computational structure inspired by the functioning of the brain.

The Royal Swedish Academy of Sciences awarded the 2024 Nobel Prize in Physics to American John Hopfield and British Geoffrey Hinton for their fundamental contributions to the development of machine learning, considered a key tool for Artificial Intelligence (AI) as we know it today.

Hopfield was born in 1933 in Chicago and conducts his research at Princeton University, USA. Hinton was born in 1947 in London and is a researcher at the University of Toronto, Canada.

In presenting the laureates, the Nobel committee highlighted that “although computers cannot think, machines can now imitate functions such as memory and learning. This year’s Nobel laureates in Physics have contributed to making this possible.”

Using principles of physics, both scientists achieved key breakthroughs that laid the foundations for artificial neural networks, a computational structure inspired by the functioning of the brain. These breakthroughs not only changed the way machines process and store information, but were also crucial for the development of modern Artificial Intelligence (AI), particularly deep learning.

The work of Hopfield, from Princeton University, and Hinton, from the University of Toronto, is deeply related to concepts from physics and biology. Although today we associate machine learning with computers and algorithms, the first steps toward the creation of artificial neural networks stemmed from the desire to understand how the human brain works and processes information. Hopfield, a theoretical physicist, played a decisive role in applying physical concepts to neuroscience to explain how the brain can store and retrieve information.

In 1982, he developed the Hopfield network, an artificial neural network model that can store patterns of information and later retrieve them even when they are incomplete or altered. This concept, known as associative memory, mimics the human ability to recall a word that is on the tip of the tongue by working through similar words until the correct one surfaces.

Hopfield applied physical knowledge, such as the principles governing atomic spin systems, to create his network. In physics, spin is a property of subatomic particles that generates a magnetic field. Inspired by this behavior, Hopfield designed a system in which neurons, or nodes, were interconnected with varying intensities, similar to how the atoms in a magnetic material influence the directions of their neighboring spins.

This approach allowed the network to efficiently associate and reconstruct patterns, a revolutionary idea that marked the beginning of a new era in neural computation.

Inspired by neuroscience, Hopfield designed a model that can reconstruct even incomplete patterns, applying physical principles similar to the behavior of magnetic materials (illustrative image: Infobae).

The Hopfield network represents a significant advance because it is based on a system capable of storing multiple patterns simultaneously. When an incomplete pattern is presented, the network finds the closest of the patterns it has memorized and reconstructs it. The process resembles rolling a ball across a landscape of peaks and valleys: dropped on a slope, the ball rolls down into the nearest valley, which corresponds to the closest stored pattern.

In technical terms, the network is programmed with a black-and-white image by assigning binary values to each node (0 for black, 1 for white). Then, an energy formula is used to adjust the connections between the nodes, allowing the network to reduce the system’s total energy and eventually reach a stable state where the original pattern has been recreated. This approach was not only novel but also proved to be scalable: the network could store and differentiate multiple images, opening the door to a form of distributed information storage that would later inspire advancements in artificial intelligence.
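To make the mechanism concrete, here is a minimal sketch in Python (an illustration written for this article, not the laureates' own code): the Hebbian rule stores a pattern, an energy function scores each state, and asynchronous updates roll the state downhill to the nearest stored pattern. It uses the conventional +1/−1 node states, to which the 0/1 black-and-white encoding above maps via s = 2x − 1.

```python
import numpy as np

def train(patterns):
    """Hebbian rule: strengthen connections between nodes that are active together."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)                 # no self-connections
    return W / patterns.shape[0]

def energy(W, s):
    """Total energy the network tries to lower: E = -1/2 * s^T W s."""
    return -0.5 * s @ W @ s

def recall(W, s, steps=100, rng=np.random.default_rng(0)):
    """Asynchronous updates: flip one node at a time toward lower energy."""
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store one toy 'image' and retrieve it from a corrupted copy.
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train(stored)
noisy = stored[0].copy()
noisy[:2] *= -1                            # corrupt two 'pixels'
print(recall(W, noisy))                    # recovers the stored pattern
print(energy(W, stored[0]) <= energy(W, noisy))  # the stored state sits lower in energy
```

Running the retrieval drives the corrupted state into the energy valley carved by training, which is exactly the ball-in-a-landscape picture described above.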

While Hopfield developed his network, Geoffrey Hinton explored how machines could learn to process patterns similarly to humans, finding their own categories without the need for explicit instructions.

Hinton pioneered the Boltzmann machine, a type of neural network that uses principles of statistical physics to discover structures in large amounts of data.

Statistical physics deals with systems made up of many similar elements, such as the molecules of a gas, whose individual states are unpredictable but can be analyzed collectively to determine properties like pressure and temperature. Hinton leveraged these concepts to design a machine that evaluates how probable each configuration of the network's connections is, based on the network's overall energy, using a formula due to Ludwig Boltzmann to compute the probability of different configurations.
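Concretely, the Boltzmann formula assigns every possible configuration s of the network a probability that falls off exponentially with its energy E(s). Written in the standard form used for such models (T is a temperature parameter, often fixed to 1 in Boltzmann machines):

```latex
% Boltzmann distribution over network configurations
P(\mathbf{s}) = \frac{e^{-E(\mathbf{s})/T}}{Z},
\qquad
Z = \sum_{\mathbf{s}'} e^{-E(\mathbf{s}')/T}
```

Low-energy configurations are exponentially more probable, so training amounts to shaping the energy landscape until configurations that resemble the data sit in its valleys.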

The Boltzmann machine has two types of nodes: visible and hidden. The former receive the initial information, while the hidden nodes generate patterns from that information, and the network's connections are adjusted so that the training examples become the most probable configurations. In this way, the machine learns from examples rather than instructions, and it can recognize patterns even when the information is new but resembles examples it has seen before.
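Here is a minimal sketch of this idea in Python, using the restricted variant of the Boltzmann machine that Hinton later popularized because it is far faster to train. All sizes, the learning rate, and the two toy training patterns are illustrative choices, and bias terms are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny restricted Boltzmann machine: visible nodes receive the data,
# hidden nodes form patterns from it. Biases omitted to keep the sketch short.
n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(0, 0.1, (n_visible, n_hidden))   # connection strengths

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)

# Two toy training examples the network should come to find highly probable.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)

for _ in range(2000):
    for v0 in data:
        ph0 = sigmoid(v0 @ W)            # hidden pattern inferred from the data
        h0 = sample(ph0)
        pv1 = sigmoid(W @ h0)            # the network's own reconstruction
        ph1 = sigmoid(pv1 @ W)
        # One-step contrastive divergence: nudge the connections so the data
        # becomes more probable than the network's reconstruction.
        W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))

# A new, partly corrupted input is pulled toward the nearest learned pattern.
v = np.array([1, 1, 0, 0, 0, 0], dtype=float)
print(np.round(sigmoid(W @ sigmoid(v @ W)), 2))
```

The update rule sketched here is a one-step form of Hinton's contrastive divergence: it strengthens connections that explain the data and weakens those that explain the network's own "fantasies."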

The work of Hopfield and Hinton not only revitalized interest in neural networks, but also paved the way for deep learning, the branch of AI that today drives much of the technological innovation around us, from virtual assistants to autonomous vehicles.

Deep neural networks, which are models with many layers of neurons, owe their existence to these early breakthroughs in artificial neural networks.
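To show what "many layers" means in practice, here is a toy forward pass in Python. The layer sizes and random weights are arbitrary illustrative choices; a real deep network would also be trained, for example by backpropagation:

```python
import numpy as np

rng = np.random.default_rng(2)

# A 'deep' network is just data passing through several layers of neurons,
# each applying connection weights and a nonlinearity.
def relu(x):
    return np.maximum(0, x)

layer_sizes = [8, 16, 16, 4]          # input -> two hidden layers -> output
weights = [rng.normal(0, 0.5, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

x = rng.random(8)                     # one input example
for W in weights:
    x = relu(x @ W)                   # each layer transforms the previous one
print(x)                              # the network's output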

Today, neural networks are essential tools for analyzing vast amounts of data, identifying complex patterns in images and sounds, and improving decision-making in fields ranging from medicine to astrophysics.

For example, in particle physics, artificial neural networks were key in the discovery of the Higgs boson, a milestone recognized by the 2013 Nobel Prize in Physics. Similarly, machine learning has helped improve the detection of gravitational waves, another recent scientific landmark.

Thanks to the discoveries of Hopfield and Hinton, AI continues to evolve at a rapid pace. In the field of molecular biology, for instance, neural networks are used to predict protein structures, which has direct implications in drug development. Additionally, in renewable energy, networks are being used to design materials with better properties for more efficient solar cells.

Author: Research Team from the Laboratory of the Future
