by Research Team from the Laboratory of the Future | Oct 3, 2024 | Artificial intelligence
STUART J. RUSSELL.
Stuart J. Russell is one of the most recognized names in the field of artificial intelligence (AI), especially for his work in ethical AI and his vision of AI that benefits humanity without compromising its fundamental values. Born in 1962 in Portsmouth, England, Russell has built a distinguished career at the intersection of AI theory and its practical application, making important contributions to both academic research and the ethical debates surrounding this technology. His rigorous approach and global vision have positioned him as one of the most influential voices in contemporary AI.
Russell was educated in England, earning his degree in Physics from the University of Oxford in 1982, before moving to the United States for graduate study; in 1986, he obtained his PhD in Computer Science from Stanford University. It was at Stanford where Russell began to develop a deep interest in AI, influenced by figures like John McCarthy, who is credited with coining the term “artificial intelligence.” After obtaining his PhD, Russell joined the University of California, Berkeley, as a professor of Computer Science, where he has developed much of his academic and research career.
One of the most notable aspects of Russell’s career is his focus on the theoretical foundations of AI and his effort to create systems that are not only capable of performing specific tasks but also of behaving rationally across a wide range of situations. In this regard, one of his most important contributions is his work in the field of “bounded rationality,” which explores how intelligent agents can make optimal decisions, given that their computational resources and available information are limited. This line of research has been crucial for the development of more realistic AI systems, which operate within the constraints imposed by the real world.
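As a loose illustration of the idea (a sketch under stated assumptions, not Russell’s own formulation), a bounded-rational agent can be modeled as a decision procedure that returns the best option it has found when its computation budget runs out; the action names, value estimator, and time budget below are hypothetical.

```python
import time

def bounded_choice(actions, estimate_value, budget_seconds=0.01):
    """Pick the best action found before the computation budget runs out.

    A bounded-rational agent stops deliberating when its resources are
    exhausted and acts on the best estimate so far; here the budget is
    modeled simply as wall-clock time.
    """
    deadline = time.monotonic() + budget_seconds
    best_action, best_value = None, float("-inf")
    for action in actions:
        if time.monotonic() > deadline:
            break  # out of thinking time: commit to the best option found so far
        value = estimate_value(action)
        if value > best_value:
            best_action, best_value = action, value
    return best_action

# Hypothetical usage: choose among candidate plans with a toy evaluator.
scores = {"plan_a": 0.2, "plan_b": 0.9, "plan_c": 0.5}
print(bounded_choice(list(scores), estimate_value=scores.get))
```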
Russell is probably best known for his book Artificial Intelligence: A Modern Approach, which he co-wrote with Peter Norvig. First published in 1995, this textbook has been adopted in more than 1,500 universities worldwide and is considered a fundamental reference in AI education. In it, Russell and Norvig provide a comprehensive introduction to the core concepts of AI, covering topics such as problem-solving, planning, machine learning, perception, and natural language processing. The book’s balanced and comprehensive approach has helped shape the education of generations of AI students, solidifying Russell’s reputation as an exceptional educator and communicator in the field.
Beyond his contributions to AI theory and teaching, Stuart Russell has also played a key role in the debate on the ethics of AI and the risks associated with its development. In particular, Russell has been a strong advocate for the creation of safe and controllable AI. In his 2019 book Human Compatible: Artificial Intelligence and the Problem of Control, Russell argues that the traditional approach to AI, which focuses on creating systems that maximize efficiency in achieving a fixed set of goals, is inherently dangerous. According to Russell, one of the primary risks is that an advanced AI, if programmed to blindly pursue goals, could act in ways contrary to human interests. For example, an AI designed to maximize industrial productivity could cause environmental or human harm if appropriate constraints are not imposed.
Russell proposes a new direction for AI research that he calls “human-value-aligned AI.” Rather than programming machines to maximize a specific goal, he argues that we should design them to act in accordance with human values, even when those values are not fully defined. This AI, according to Russell, should be designed to be “uncertain” about human preferences, always willing to adjust based on new information about what humans truly want. This approach introduces deliberate uncertainty into AI systems so that machines cannot harm humans in their pursuit of misunderstood or poorly defined objectives.
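A minimal sketch of how preference uncertainty can enter an agent’s decision rule, assuming a toy setting with a handful of candidate reward functions (the hypotheses, actions, and threshold below are illustrative assumptions, not the formal model in Human Compatible):

```python
def expected_reward(action, reward_hypotheses):
    """Average reward of an action under the agent's belief about human preferences.

    reward_hypotheses: list of (probability, reward_function) pairs.
    """
    return sum(p * r(action) for p, r in reward_hypotheses)

def choose_or_ask(actions, reward_hypotheses, disagreement_threshold=0.5):
    """Act only if the reward hypotheses roughly agree; otherwise defer to the human."""
    best = max(actions, key=lambda a: expected_reward(a, reward_hypotheses))
    values = [r(best) for _, r in reward_hypotheses]
    if max(values) - min(values) > disagreement_threshold:
        return "ask_human"  # too uncertain about what people actually want
    return best

# Hypothetical example: two hypotheses about how much humans value speed vs. care.
hypotheses = [
    (0.6, lambda a: {"fast": 1.0, "careful": 0.7}[a]),
    (0.4, lambda a: {"fast": -1.0, "careful": 0.6}[a]),
]
print(choose_or_ask(["fast", "careful"], hypotheses))  # -> "careful": hypotheses agree on it
```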
Russell’s work in this area has led him to actively participate in global initiatives advocating for the ethical development of AI. He is a co-founder of the Center for Human-Compatible Artificial Intelligence (CHAI), an institution dedicated to researching how to create AI that cooperates effectively with humans rather than competing with or harming them. Through his work at CHAI, Russell has contributed to laying the foundations for a new AI ethics based on the recognition of the importance of human values and the need to design systems that align with these principles.
In addition to his academic research, Russell has played a key role in promoting the debate on AI risks at an international level. He has testified before various government committees and has been a consultant for organizations such as the United Nations and the European Parliament. One of his main concerns is the possible militarization of AI, especially in the form of autonomous weapon systems that could make life-or-death decisions without human intervention. Russell has been one of the most active voices in the campaign to ban “lethal autonomous weapons,” arguing that their use poses significant ethical and practical risks, as they could destabilize global security and make armed conflicts harder to control.
In recognition of his contributions to AI research and his impact on the global debate about AI ethics, Stuart Russell has received numerous awards and honors throughout his career. Among them are the IJCAI Computers and Thought Award (1995), the ACM-AAAI Allen Newell Award (2005), and election as a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI). These accolades reflect not only his influence in the AI field but also his commitment to finding practical and ethical solutions to the challenges posed by technology.
In terms of legacy, Stuart J. Russell has been a key leader in the development of modern AI and in promoting a more responsible and ethical approach to the research and development of this technology. His work has laid the groundwork for a deeper understanding of how AI systems can interact with the real world, and his emphasis on safety and compatibility with human values has opened new avenues for the study of ethical AI. At the same time, his influence as an educator, through his seminal textbook and his teaching at Berkeley, has shaped thousands of students who today work on the most varied aspects of AI.
Throughout his career, Russell has demonstrated that artificial intelligence is not only a technical issue but also a deeply ethical one, requiring careful reflection on the impact these technologies will have on the world. His vision of AI that benefits humanity without endangering our most fundamental values remains a crucial influence in the field, and his legacy will continue to guide the development of AI in the coming decades.
by Research Team from the Laboratory of the Future | Sep 28, 2024 | Artificial intelligence
JOHN MCCARTHY.
John McCarthy is widely recognized as one of the most influential pioneers in the field of artificial intelligence (AI). Born on September 4, 1927, in Boston, Massachusetts, McCarthy grew up in a family of Irish and Lithuanian immigrants. From an early age, he showed a notable interest in mathematics and science, inclinations that led him to become a key figure in the development of what we now know as AI. This article explores his academic career, his technical contributions to the field, and the legacy he left in the scientific community.
McCarthy studied mathematics at the California Institute of Technology (Caltech), earning his bachelor’s degree in 1948, and then pursued graduate studies at Princeton University, where he completed his PhD in Mathematics in 1951. During his time at Princeton, he was influenced by the rich tradition of mathematical logic developed by figures such as Kurt Gödel and Alan Turing. It was in this context that McCarthy became interested in the intersections between formal logic and the ability of machines to process information intelligently.
One of McCarthy’s most notable achievements was coining the term “artificial intelligence,” which he introduced in the 1955 proposal for the Dartmouth Conference held in 1956, an event he helped organize. There, McCarthy and his colleagues presented a series of ideas that formed the conceptual foundation of AI as an independent field of study, starting from the conjecture that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” This statement would become the core of AI research in the following decades. The Dartmouth Conference is viewed as the official starting point of AI, and McCarthy as one of its founding fathers.
In the technical realm, one of McCarthy’s greatest legacies is the development of LISP, a programming language that has remained a fundamental tool in AI research. LISP was designed by McCarthy in 1958 as a language for manipulating symbolic data, which made it particularly useful for knowledge representation and logical reasoning in AI. LISP introduced several innovations, including the idea of “lists” as fundamental data structures, and was one of the first languages to implement recursion, a key technique for solving complex problems. Although other, more modern programming languages have emerged, LISP is still used in specialized applications and in teaching advanced AI and programming concepts.
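To make the two ideas singled out here concrete, lists as the fundamental data structure and recursion, the following sketch imitates LISP-style cons cells in Python; it is a pedagogical approximation rather than actual LISP code.

```python
# A LISP-style list is built from "cons cells": pairs of (head, rest-of-list).
def cons(head, tail):
    return (head, tail)

NIL = None  # the empty list

def total(cell):
    """Recursively sum a cons-cell list: the recursion mirrors the list's structure."""
    if cell is NIL:
        return 0
    head, tail = cell
    return head + total(tail)

# Hypothetical example: the list (1 2 3) and its sum.
numbers = cons(1, cons(2, cons(3, NIL)))
print(total(numbers))  # -> 6
```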
Another of McCarthy’s major achievements was his contribution to automated systems and, above all, to the concept of “time-sharing” computing. During his time at MIT in the late 1950s and early 1960s, McCarthy helped develop time-sharing, which allowed multiple users to interact with a single computer simultaneously. This innovation dramatically changed the way computing was understood and used and was a precursor to cloud computing and other forms of distributed processing.
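The core of time-sharing can be sketched as a round-robin scheduler that hands each user’s job a small slice of the machine in turn; this is a schematic illustration with made-up job names, not the design of any historical system.

```python
from collections import deque

def round_robin(jobs, slice_units=1):
    """Interleave jobs so every user gets frequent access to the single machine.

    jobs: dict mapping user name -> units of work remaining.
    Returns the order in which work was executed.
    """
    queue = deque(jobs.items())
    timeline = []
    while queue:
        user, remaining = queue.popleft()
        done = min(slice_units, remaining)
        timeline.append(user)
        if remaining - done > 0:
            queue.append((user, remaining - done))  # back of the line for the next turn
    return timeline

# Hypothetical example: three users sharing one computer.
print(round_robin({"ada": 2, "grace": 3, "alan": 1}))
# -> ['ada', 'grace', 'alan', 'ada', 'grace', 'grace']
```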
McCarthy was also a staunch advocate of the symbolic approach to AI. Unlike other researchers who focused on neural networks and statistical methods, McCarthy believed that AI should be based on the manipulation of symbols and formal logic to simulate human thought. His vision of “strong AI,” that is, AI that could not only perform specific tasks but also develop a form of general reasoning and understand the world similarly to humans, deeply influenced the development of AI in its early decades. While AI techniques have drastically changed since McCarthy’s days, his focus on knowledge representation and logic remains relevant in areas such as automated planning and expert systems.
Throughout his career, McCarthy received numerous awards and recognitions for his contributions to the field of AI. In 1971, he was awarded the Turing Award, considered the “Nobel” of computing, for his role in the invention of LISP and his theoretical contributions to AI. Throughout his life, he continued publishing influential research and promoting his vision of AI, maintaining his belief in the possibility of creating artificial general intelligence (AGI), despite the skepticism of some of his contemporaries.
In addition to his technical contributions, McCarthy was also a deep thinker about the ethical implications of AI. He was one of the first to warn about the potential risks of advanced AI, noting that future intelligent systems should be designed with strong ethical considerations. However, McCarthy was optimistic about humanity’s ability to manage these risks and often advocated for a careful but progressive approach to the development of advanced technologies.
John McCarthy’s legacy transcends his technical contributions. His ability to formulate fundamental questions about the nature of intelligence and his determination to build tools that emulated these capabilities laid the foundation for much of today’s AI research. Although he did not live to see the full realization of his dream of general artificial intelligence, McCarthy was a key figure in the evolution of the field, inspiring generations of scientists and technologists who continue working on the problems he raised.
McCarthy passed away on October 24, 2011, at the age of 84, leaving a lasting legacy in the history of science and technology. His ideas on knowledge representation, formal logic, and programming languages continue to influence current research, while his long-term vision of AI remains a beacon for researchers seeking to understand and replicate human intelligence in machines.
In summary, John McCarthy was one of the most influential architects of the founding of artificial intelligence as a field of study. Through innovations like the LISP programming language and his work on time-sharing computing, McCarthy not only contributed fundamental tools for AI but also helped define the long-term vision of the field. His legacy lives on in the scientific community, where his ideas continue to inspire new approaches and discoveries in the pursuit of building truly intelligent machines.
by Research Team from the Laboratory of the Future | Sep 3, 2024 | News
The request was made through an open letter. The new European legislation will not be fully enforced until 2026.
Around thirty tech companies, including Meta (Facebook, Instagram) and Spotify, along with researchers and associations, have asked the European Union to clarify its regulations on artificial intelligence (AI) in an open letter.
“Europe has become less competitive and innovative than other regions, and today risks losing even more ground in the AI era due to inconsistent regulatory decisions,” says the letter published on Thursday.
“Recently, regulations have become fragmented and unpredictable,” argue the signatories, who believe that interventions by European authorities “have created much uncertainty about the types of data that can be used to train AI models.” For this reason, they call on European policymakers for “harmonized, consistent, fast, and clear decisions on data regulations in the EU.”
In August, the new European legislation to regulate AI, a global first, officially came into force. Its goal is to promote innovation while protecting privacy. The regulation imposes restrictions on AI systems that pose a danger to society. It also requires systems like ChatGPT to ensure data quality and respect copyright.
The new legislation will not be fully applied until 2026, but some provisions will become binding next year.
Europe and the First AI Regulation Law
The EU is the first jurisdiction to enact a legal framework for the development of this type of technology. The law was approved in March 2024 and came into force in August 2024.
The regulation assigns rules to each company using AI systems based on four levels of risk: no risk, minimal risk, high risk, and prohibited AI systems. This categorization also determines the deadlines each company must meet to comply with the new law.
In this regard, with the law’s entry into force, the EU will fully ban certain practices starting February 2025. These include manipulating user decision-making or expanding facial recognition databases through web scraping.
Other AI systems considered high-risk, such as those that collect biometric data or are used for critical infrastructure or employment decisions, will have to comply with stricter rules. Among other requirements, companies will have to disclose their AI training data sets and provide proof of human oversight.
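As a rough, non-authoritative illustration of how such a tiered scheme might be expressed in software (the tier names follow the article, but the obligation summaries are simplified assumptions, not legal guidance):

```python
# Simplified, illustrative mapping of the AI Act's risk tiers to obligations.
RISK_TIERS = {
    "prohibited": "banned outright (e.g., manipulative systems, scraped face databases)",
    "high": "disclose training data sets and provide proof of human oversight",
    "minimal": "little additional regulation",
    "none": "no specific obligations",
}

def obligations(risk_tier: str) -> str:
    """Return the (simplified) obligation attached to a risk tier."""
    return RISK_TIERS.get(risk_tier, "unknown tier")

print(obligations("high"))
```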
Thomas Regnier, spokesperson for the European Commission, stated that “around 85% of current AI companies” fall into the “minimal risk” category, which requires very little regulation.
The regulation’s entry into force requires EU member states to establish competent national authorities by August to oversee its implementation in their countries. Meanwhile, the European Commission is preparing to accelerate AI investment, with an expected injection of €1 billion in 2024 and up to €20 billion by 2030.
TEXT AND SIGNATORIES OF THE OPEN LETTER.
A fragmented regulation means that the EU risks missing out on the AI era.
We are a group of companies, researchers, and institutions that are an integral part of Europe and work to serve hundreds of millions of Europeans. We want Europe to succeed and thrive, even in the field of cutting-edge artificial intelligence research and technology. But the reality is that Europe has become less competitive and innovative compared to other regions and now risks falling even further behind in the era of artificial intelligence due to inconsistency in regulatory decision-making.
In the absence of coherent rules, the EU will lose the opportunity to leverage two pillars of AI innovation. The first is developments in “open” models that are made available to everyone for free to use, modify, and develop, multiplying the benefits and spreading social and economic opportunities. The second is the latest “multimodal” models, which function seamlessly across text, images, and voice and will enable the next leap forward in AI. The difference between text-only models and multimodal ones is like the difference between having one sense and having all five.
Cutting-edge open models like Llama (text-based or multimodal) can boost productivity, drive scientific research, and add hundreds of billions of euros to the European economy. Public institutions and researchers are already using these models to accelerate medical research and preserve languages, while established companies and startups are gaining access to tools they could never build or afford on their own. Without them, AI development will happen elsewhere, depriving Europeans of the technological advances enjoyed in the United States, China, and India.
Research estimates that generative AI could increase global GDP by 10% over the next decade, and EU citizens should not be denied this growth.
The EU’s ability to compete with the rest of the world in AI and to leverage the benefits of open models depends on its single market and a shared set of regulatory rules. If companies and institutions are going to invest tens of billions of euros to develop generative AI for European citizens, they need clear rules that are applied consistently and allow the use of European data. But lately, regulatory decision-making has become fragmented and unpredictable, while interventions by European data protection authorities have created enormous uncertainty about which types of data can be used to train AI models. This means that the next generation of open AI models, and the products and services we build on them, will not understand or reflect European knowledge, culture, or languages. The EU will also miss out on other innovations, such as Meta’s AI assistant, which is on track to become the world’s most used AI assistant by the end of this year.
Europe faces a choice that will affect the region for decades.
It can choose to reaffirm the principle of harmonization enshrined in regulatory frameworks like the GDPR so that AI innovation happens here at the same scale and speed as elsewhere, or it can continue rejecting progress, betraying the ambitions of the single market, and watching as the rest of the world builds on technologies that Europeans will not have access to.
We hope that European policymakers and regulators see what is at stake if a course correction does not happen. Europe cannot afford to miss out on the broad benefits that come from open, responsibly built AI technologies, which will accelerate economic growth and enable progress in scientific research. For this, we need harmonized, consistent, fast, and clear decisions within the EU’s data rules framework that allow the use of European data in AI training for the benefit of Europeans. Decisive measures are needed to help unlock the creativity, ingenuity, and entrepreneurial spirit that will ensure Europe’s prosperity, growth, and technical leadership.
Signed,
Alexandre Lebrun
CEO of Nabla
André Martins
Vice President of Artificial Intelligence Research, Unbabel
Aureliusz Górski
Founder and CEO of CampusAI
Börje Ekholm
President and CEO of Ericsson
Benedict Macon-Cooney
Senior Policy Strategist, Tony Blair Institute
Christian Klein
CEO of SAP SE
Daniel Ek
Founder and CEO of Spotify
Daniel J. Beutel
Co-founder and CEO of Flower Labs
David Lacombled
President, La villa numeris
Diarmuid Gill
CTO of Criteo
Edgar Riba
President of Kornia AI
Egle Markeviciute
Secretary, Consumer Choice Center Europe
Eugenio Valdano
PhD
Federico Marchetti
Founder of YOOX
Francesco Milleri
President and CEO of EssilorLuxottica
Georgi Gerganov
ggml.ai
Han Stoffels
CEO of 8vance
Hira Mehmood
Co-founder and Board Member of Bineric AI
Hosuk Lee-Makiyama
Director, ECIPE
John Elkann
CEO of Exor
Josef Sivic
Researcher, Czech Institute of Computer Science, Robotics, and Cybernetics, Czech Technical University
Julien Launay
CEO and Co-founder of Adaptive ML
Lorenzo Bertelli
Marketing Director of Prada Group
Maciej Hutyra
CEO of SalesTube Sp. z o.o.
Marco Baroni
Research Professor, ICREA
Marco Tronchetti Provera
Executive Vice President of Pirelli
Mark Zuckerberg
Founder and CEO of Meta
Miguel Ferrer
Aesthetic Technology
Martin Ott
CEO of Taxfix SE
Matthieu Rouif
CEO of Photoroom
Maurice Lévy
President Emeritus of Publicis Groupe
Maximo Ibarra
CEO of Engineering Ingegneria Informatica SpA
Michal Kanownik
CEO of the Digital Poland Association
Miguel López
CEO of thyssenkrupp AG
Minh Dao
CEO, FULLY AI
Niklas von Weihe
CTO, FULLY AI
Nicolò Cesa-Bianchi
Professor of Computer Science, University of Milan, Italy
Patrick Collison
Patrick Pérez
AI Researcher
Philippe Corrot
Co-founder and CEO of Mirakl
Professor Dagmar Schuller
CEO of audEERING
Ralf Gommers
Director, Quansight
Sebastian Siemiatkowski
CEO and Co-founder of Klarna
Simonas Černiauskas
CEO of Infobalt
Stefano da Empoli
President of the Institute for Competitiveness (I-Com)
Stefano Iacus
Senior Research Scientist at Harvard University
Vincent Luciani
CEO of Artefact
Vivian Bouzali
CCCO, METLEN Energy and Metals
Yann LeCun
VP and Chief AI Scientist, Meta
by Research Team from the Laboratory of the Future | Jun 22, 2023 | Economy and the future of work
The U.S. approves the human consumption of lab-grown chicken meat.
The U.S. food regulatory agency has given the green light for the first time to the commercialization of lab-grown chicken meat in supermarkets and restaurants.
The authorization granted:
The Food and Drug Administration (FDA), the U.S. food and drug regulatory agency, has authorized for the first time in its history a lab-grown meat product for human consumption. This authorization applies to chicken meat produced in the facilities of the Californian company Upside Foods and the company Good Meat, which now have approval to bring their products to supermarkets and restaurants in the U.S.
Upside Foods, formerly known as Memphis Meats, will be able to market its lab-grown chicken once the USDA (U.S. Department of Agriculture) has inspected its facilities. The meat is produced by extracting living cells from the animal, which are then placed in stainless steel tanks where they replicate until they form a structure and consistency similar to a chicken fillet.
After evaluating the production process and the cultivated cell material used by Upside Foods, the FDA stated that it has “no further questions” regarding the safety of its lab-grown chicken fillet. “The world is experiencing a food revolution, and the U.S. FDA is committed to supporting innovation in the food supply,” said FDA Commissioner Robert Califf and Susan Mayne, director of the FDA’s Center for Food Safety and Applied Nutrition, in a joint statement.
The implementation of this measure marks the beginning of a new era aimed at eliminating animal slaughter and reducing the environmental impacts of grazing, growing food for animals, and animal waste.
“Instead of dedicating so much land and water to feeding animals that will be slaughtered, we can do something different,” said Josh Tetrick, co-founder and CEO of Eat Just, the operator of the multinational Good Meat.
On Wednesday, both companies received approval from the required federal inspectors to sell lab-grown meat and chicken in the U.S.
Additionally, the Food and Drug Administration gave the green light to another manufacturer, Joinn Biologics, which works with Good Meat.
The new lab-grown meat market:
This meat is said to drastically reduce the water consumption, land use, and other resources required by traditional industrial livestock production. It is also considered more sustainable because it generates less pollution and CO2. It is a global market that has attracted over $2 billion in investment, led mainly by Israel and the U.S.
The Israeli company Aleph Farms previously introduced the first ribeye steak made with a 3D printer, a significant evolution from the lab-grown ground meat seen so far. Making a steak is much more complicated than recreating ground meat; to replicate a whole muscle cut, it must be given a structure to support it, along with fat and connective tissue. There is also Future Meat, one of the largest lab-grown meat companies in the world, which claims its production process reduces the cost of its chicken breasts from 15 euros to just 6.80 euros per 450 grams.
The U.S., particularly California, is another place where significant investments are being made in this technology. Three companies, Finless Foods, BlueNalu, and Upside Foods, are competing for a share of the cultivated meat market, producing not only chicken, lamb, and beef but also crustaceans and mollusks.
However, there are voices that are less optimistic. French researchers Sghaier Chriki and Jean-François Hocquette published a study titled “The Myth of Cultivated Meat,” questioning whether the industry will be able to artificially produce compounds like hormones and growth factors that are naturally found in animals.
The mass adoption of this type of meat on our tables depends on regulatory agencies. So far, Singapore and the U.S. agencies have taken the path of authorizations, and it is expected that the European agency will do so soon.
While the innovation may seem surprising in the U.S., globally more than 150 companies are cultivating meat from animal cells, not just chicken but also pork, lamb, fish, and beef, which have a greater environmental impact.
The “lab-grown” chicken is cultivated in steel tanks with cells from a live animal, a fertilized egg, or a special cell bank.
In Upside’s case, the product comes out in large sheets, which are then shaped into chicken cutlets or sausages.
Meanwhile, Good Meat, which already sells lab-grown meat in Singapore, the first country to authorize it in 2020, turns masses of chicken cells into ribs, nuggets, and ground meat.
Where to consume the authorized meat:
Nevertheless, the arrival of this new food in the United States will not be immediate, nor will it be available to everyone.
“Lab-grown chicken is much more expensive than traditional farm-raised chicken. Additionally, it cannot be produced on the same scale as traditional meat,” according to Ricardo San Martín, director of the Alt:Meat Lab at the University of California, Berkeley. For now, the companies plan to serve the new food in selected restaurants. Upside has already partnered with the Bar Crenn restaurant in San Francisco, while Good Meat’s dishes will be served at a Washington, D.C. restaurant run by chef and owner José Andrés. No agreements have yet been made with any supermarket chains.
by Research Team from the Laboratory of the Future | Jun 21, 2023 | News
On Wednesday, June 21, 2023, the Technological Laboratory of Uruguay (LATU) welcomed Microsoft’s Artificial Intelligence Lab to its Technology Park, in a ceremony held in the Las Orquídeas room and attended by President Dr. Luis Lacalle Pou.
LATU’s Technology Park brings together public and private companies and organizations, forming a powerful international ecosystem of technology, education, entrepreneurship, research, creativity, and innovation.
The Microsoft AI Co-Innovation Lab, one of four labs of its kind in the world, will be installed in the park, creating a new landscape of opportunities for business innovation by providing cutting-edge tools and technology.
LATU will serve as a bridge to connect businesses and projects with the lab, accelerating innovation and supporting organizations of various sectors and sizes. It will enable the offering of tailor-made solutions that integrate artificial intelligence and the internet of things to drive business and productive development.