Artificial Intelligence XI How the World Should Respond to the Artificial Intelligence Revolution


Pausing AI developments is not enough. We need to shut it all down.

Yudkowsky is a U.S. decision theorist and leads research at the Machine Intelligence Research Institute. He has been working on Artificial General Intelligence alignment since 2001 and is widely regarded as one of the founders of the field.

An open letter published in late March 2023 calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped forward and signed it. It is a marginal improvement.

I refrained from signing because I believe the letter understates the severity of the situation and asks for too little to resolve it.

The key issue is not “human-level intelligence” (as the open letter puts it); it’s what happens after AI reaches an intelligence greater than human. The key thresholds there may not be obvious; we certainly cannot calculate in advance what will happen and when, and it currently seems imaginable that a research lab could cross critical lines without noticing.

Many researchers immersed in these topics, including myself, expect that the most likely outcome of building a superhumanly intelligent AI, under any remotely similar circumstances to the current ones, is that literally everyone on Earth will die. Not like “maybe possibly some remote chance,” but like “that’s the obvious thing that would happen.” It’s not that you can’t, in principle, survive by creating something much smarter than you; it’s that it would require precision and preparation and new scientific knowledge, and probably not having AI systems made up of giant, inscrutable sets of fractional numbers.

Without that precision and preparation, the most likely outcome is an AI that doesn’t do what we want and doesn’t care about us or sentient life in general. That kind of care is something that, in principle, could be imbued into an AI, but we are not ready and currently do not know how to do it.

In the absence of that care, we get “AI doesn’t love you, nor does it hate you, and you are made of atoms it can use for something else.”

The likely outcome of humanity facing an opposing superhuman intelligence is total loss. Valid metaphors include “a 10-year-old trying to play chess against Stockfish 15,” “the 11th century trying to fight against the 21st century,” and “Australopithecus trying to fight against Homo sapiens.”

To visualize a hostile superhuman AI, don’t imagine a lifeless, intelligent thinker living inside the Internet and sending malicious emails. Imagine an entire alien civilization, thinking millions of times faster than humans, initially confined to computers, in a world of creatures that are, from its perspective, very stupid and slow. A sufficiently intelligent AI will not stay confined to computers for long. In today’s world, it can send DNA strands through email to laboratories that will produce proteins on demand, enabling an AI initially confined to the Internet to build artificial life forms or directly start post-biological molecular manufacturing.

If someone builds an AI that is too powerful under the current conditions, I expect that every member of the human species and all biological life on Earth will die soon after.

There is no proposed plan for how we could do such a thing and survive. OpenAI’s openly declared intent is to have some future AI do our AI alignment task. Just hearing that this is the plan should be enough to make any sensible person panic. The other leading AI lab, DeepMind, has no plan.

A side note: none of this danger depends on whether AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize and calculate results that meet sufficiently complicated outcome criteria. That said, I would be negligent in my moral duties as a human if I did not also mention that we have no idea how to determine whether AI systems are self-aware, since we have no idea how to decode anything that happens in the giant inscrutable matrices, and therefore, at some point, unknowingly, we may create digital minds that are truly self-aware and should have rights and should not be property.

The rule that most people aware of these problems would have backed 50 years ago was that if an AI system can speak fluently and says it is self-aware and demands human rights, that should be a barrier to people simply owning that AI and using it beyond that point. We’ve already passed that old line in the sand. And that was probably correct. I agree that current AIs are probably just imitating conversation about self-awareness from their training data. But I point out that, with the little understanding we have of the internal parts of these systems, we actually don’t know.

If that’s our state of ignorance for GPT-4, and GPT-5 is the same giant leap in capability as GPT-3 to GPT-4, I think we will no longer be able to justifiably say “probably not self-aware” if we allow people to build GPT-5. It will just be “I don’t know; no one knows.” If you can’t be sure whether you’re creating a self-aware AI, this is alarming not just because of the moral implications of the “self-aware” part, but because not being sure means you have no idea what you’re doing. And that’s dangerous, and you should stop.

On February 7, Satya Nadella, CEO of Microsoft, publicly boasted that the new Bing would make Google “come out and show it can dance.” “I want people to know we made them dance,” he said.

This is not how a Microsoft CEO speaks in a sane world. It shows the overwhelming gap between how seriously we are taking the problem and how seriously we needed to start taking it 30 years ago.

We’re not going to close that gap in six months.

More than 60 years passed between when the notion of artificial intelligence was first proposed and studied and when we reached current capabilities. Solving the safety of superhuman intelligence, not perfect safety, safety in the sense of “not literally killing everyone,” could reasonably take at least half that time. And what’s tricky about trying this with superhuman intelligence is that if you mess up the first attempt, you can’t learn from your mistakes, because you’re dead. Humanity does not learn from the mistake, dust itself off, and try again, as with other challenges we’ve overcome in our history, because we are all gone.

Trying to do something right on the first truly critical attempt is an extraordinary task, in both science and engineering. We are not coming in with anything like the approach that would be required to do it successfully. If we held the emerging field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge designed to carry a couple of thousand cars, the whole field would be shut down tomorrow.

We are not prepared. We are not on track to be prepared in a reasonable time window. There is no plan. Progress in AI capabilities is massive, far ahead of the progress in AI alignment or even understanding what the hell is going on inside those systems. If we really do this, we are all going to die.

Many researchers working on these systems think we are rushing toward a catastrophe, and more of them dare to say it privately than publicly; but they think they can’t unilaterally stop the forward fall, that others will continue even if they personally quit their jobs. And so everyone thinks they might as well keep going too. This is a stupid state of affairs and an undignified way for Earth to die, and the rest of humanity should intervene at this point and help the industry solve its collective action problem.

Some of my friends have recently informed me that when people outside the AI industry first hear about the extinction risk of Artificial General Intelligence, their reaction is “maybe we shouldn’t build AGI then.”

Hearing this gave me a small flicker of hope, because it is a simpler, more sensible, and frankly more reasonable reaction than what I’ve heard in the last 20 years of trying to get someone in the industry to take things seriously. Anyone who speaks like this deserves to hear how serious the situation actually is, and not be told that a six-month moratorium will solve it.

On March 16, my partner sent me this email. (Later they gave me permission to share it here).

“Nina lost a tooth! The usual way kids do it, not by carelessness! Seeing GPT-4 pass those standardized tests on the same day Nina reached a childhood milestone triggered an emotional wave that made me lose my head for a minute. Everything is moving too fast. I’m worried that sharing this might increase your own pain, but I’d rather you know than each of us suffer alone.”

When the internal conversation is about the pain of seeing your child lose their first tooth and thinking they won’t get the chance to grow up, I think we are past the point of playing political chess over a six-month moratorium.

If there were a plan for Earth to survive, if we just approved a six-month moratorium, I would support that plan. There is no such plan.

This is what would actually need to be done:

The moratorium on new large training runs must be indefinite and global. There can be no exceptions, even for governments or the military. If the policy starts with the US, then China must see that the US is not seeking an advantage but is trying to avoid a terribly dangerous technology that can have no true owner and that would kill everyone in the US, in China, and on Earth. If I had infinite freedom to write laws, I might carve out a single exception for AIs trained solely to solve problems in biology and biotechnology, not trained on Internet text, and not to the level where they begin to talk or plan; but if that even slightly complicated the problem, I would immediately discard that proposal and say to simply shut everything down.

Turn off all large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all large training runs. Put a limit on the amount of computing power anyone can use to train an AI system, and move it downward over the coming years to offset more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent prohibited activities from shifting elsewhere. Track all GPUs sold. If intelligence says a country outside the agreement is building a GPU cluster, be less afraid of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue data center with an airstrike.
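
To make the arithmetic of such a cap concrete, here is a minimal sketch in Python. Every number in it (the starting cap and the assumed yearly gain in algorithmic efficiency) is a hypothetical illustration, not a figure proposed in the article; the point is only that the permitted training compute would be lowered each year so that the capability reachable under the cap stays roughly constant.

```python
# Hypothetical sketch of a declining training-compute cap.
# All numbers are illustrative assumptions, not figures from the article.

INITIAL_CAP_FLOP = 1e25          # assumed starting cap on training compute, in FLOP
EFFICIENCY_GAIN_PER_YEAR = 2.0   # assumed factor by which training algorithms get more efficient each year


def compute_cap(years_after_start: int) -> float:
    """Permitted training compute N years after the cap is introduced.

    The cap is divided by the assumed yearly efficiency gain so that the
    capability reachable with the permitted compute stays roughly constant.
    """
    return INITIAL_CAP_FLOP / (EFFICIENCY_GAIN_PER_YEAR ** years_after_start)


if __name__ == "__main__":
    for year in range(6):
        print(f"year {year}: cap = {compute_cap(year):.2e} FLOP")
```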

Do not frame anything as a conflict between national interests; make it clear that anyone talking about an arms race is a fool. That we all live or die as one in this is not policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that is what it takes to reduce the risk of large AI training runs.

That is the kind of policy shift that would make my partner and me hold each other and say that a miracle happened, and now there’s a chance Nina will live. Sane people who hear about this for the first time and sensibly say “maybe we shouldn’t” deserve to hear, honestly, what would be needed for that to happen. And when their policy request is that big, the only way it will pass is if lawmakers realize that if they conduct business as usual and do what’s politically easy, their own children are going to die too.

We are not ready. We are not on track to be significantly more prepared in the foreseeable future. If we proceed with this, everyone will die, including the children who didn’t choose this and did nothing wrong.


Future Lab Analysis Team / Time Magazine (England). Article by Eliezer Yudkowsky.

Translation from English: Translation and Interpretation Team of the Future Lab.

How should the world respond to the Artificial Intelligence Revolution?


Bremmer is a foreign affairs columnist and editor-at-large of TIME. He is the president of Eurasia Group, a political risk consultancy, and of GZERO Media, a company focused on providing smart and engaging coverage of international affairs. He teaches applied geopolitics at the School of International and Public Affairs at Columbia University, and his most recent book is The Power of Crisis.

The growing development of artificial intelligence will produce medical breakthroughs that will save and improve billions of lives. It will become the most powerful engine of prosperity in history. It will give an incalculable number of people, including generations yet to be born, powerful tools that their ancestors never imagined. But the risks and challenges posed by Artificial Intelligence are also becoming clear, and now is the time to understand and address them. Here are the biggest ones.

The health of democracy and free markets depends on access to accurate and verifiable information. In recent years, social media has made it harder to distinguish fact from fiction, but advances in AI will unleash legions of bots that seem much more human than those we’ve encountered to date. Much more sophisticated deepfakes of audio and video will undermine our (already diminished) trust in those who serve in government and those who report the news. In China, and later in its client states, AI will take facial recognition and other tools that can be used for state surveillance to exponentially higher levels of sophistication.

This problem extends beyond our institutions because the production of “generative AI,” artificial intelligence that generates sophisticated written, visual, and other types of content in response to user prompts, is not limited to large tech companies. Anyone with a laptop and basic programming skills already has access to much more powerful AI models than existed just a few months ago and can produce unprecedented volumes of content. This proliferation challenge is about to grow exponentially as millions of people will have their own GPT running on real-time data available on the Internet. The AI revolution will enable criminals, terrorists, and other wrongdoers to code malware, create biological weapons, manipulate financial markets, and distort public opinion with astonishing ease.

Artificial intelligence can also exacerbate inequality, both within societies among small groups with wealth, access, or special skills, and between wealthier and poorer nations.

Artificial Intelligence will create disruption in the workforce. While technological advances in the past have mainly created more jobs than they have eliminated and increased productivity and prosperity in general, there are crucial caveats. Jobs created by major technological changes in the workplace require skill sets different from those they have destroyed, and the transition is never easy. Workers need to be retrained. Those who cannot retrain must be protected by a social safety net that varies in strength from place to place. Both issues are costly, and it will never be easy for governments and private companies to agree on how to share this burden.

More fundamentally, the displacement caused by Artificial Intelligence will happen on a much broader and much faster scale than past transitions. The disruption of the transition will generate economic and therefore political upheaval worldwide.

Finally, the AI revolution will also impose an emotional and spiritual cost. Humans are social animals. We thrive on interaction with others and wither in isolation. Too often, bots will replace humans as companions for many people, and by the time scientists and doctors understand the long-term impact of this trend, our growing dependence on artificial intelligence, even for companionship, may be irreversible. This may be the most important challenge of AI.

Challenges like these will require a global response. Today, artificial intelligence is not regulated by government officials but by tech companies. The reason is simple: you cannot create rules for a game you do not understand. But relying on tech companies to regulate their products is not a sustainable plan. They primarily exist to make a profit, not to protect consumers, nations, or the planet. It’s a bit like letting energy companies lead strategies to combat climate change, except that warming and its dangers are already understood in ways that the risks of AI are not, leaving us without lobbying groups that can help push for the adoption of smart and healthy policies.

So, where are the solutions? We will need national action, global cooperation, and some common-sense cooperation from the governments of the United States and China, in particular.

It will always be easier to achieve well-coordinated policy within national governments than at the international level, but political leaders have their own priorities. In Washington, policymakers have focused mainly on winning a race with China to develop the technological products that best support 21st-century security and prosperity, and this has encouraged them to give tech companies serving the national interest something close to free rein. Chinese lawmakers, fearful that AI tools could undermine their political authority, have regulated much more aggressively. European lawmakers have focused less on security or profits and more on the social impact of AI advances.

But all will have to establish rules in the coming years that limit AI bots’ ability to undermine political institutions, financial markets, and national security. This means identifying and tracking bad actors, as well as helping people separate real information from false. Unfortunately, these are big, costly, and complicated steps that lawmakers are unlikely to take until they face AI-generated (but real) crises. That cannot happen until the discussion and debate on these issues begin.

Unlike with climate change, the world’s governments have yet to agree that the AI revolution poses an existential cross-border challenge. Here, the United Nations has a role to play as the only institution with the convening power to develop a global consensus. A UN-led approach to AI will never be the most efficient answer, but it will help achieve consensus on the nature of the problem and marshal international resources.

By forging agreement on which risks are most likely, most impactful, and emerging most rapidly, an AI-focused equivalent of the Intergovernmental Panel on Climate Change could institutionalize regular meetings and the production of “State of AI” reports that delve ever deeper into the heart of AI-related threats. Just as with climate change, this process would also need to include public policy officials, scientists, technologists, private-sector delegates, and individual activists representing the majority of member states, in order to create a COP (Conference of the Parties) process to address threats to biosafety, freedom of information, workforce health, and so on. There could also be an artificial intelligence agency inspired by the International Atomic Energy Agency to help monitor AI proliferation.

That said, there is no way to address the rapidly metastasizing risks created by the AI revolution without a much-needed infusion of common sense into U.S.-China relations. After all, it is the technological competition between the two countries and their major tech companies that creates the greatest risk of war, especially as AI plays an increasingly larger role in weapons and military planning.

Beijing and Washington must develop and maintain high-level discussions about the emerging threats to both countries (and the world) and the best ways to contain them. And they cannot wait for an AI version of the Cuban Missile Crisis to force them into genuine transparency in handling their competition. To create an “AI arms control agreement” with mutual monitoring and verification, each government must listen not only to each other but also to the technologists on both sides who understand the risks that need to be contained.

Crazy? Absolutely. The timing is terrible because these advancements are coming at a time of intense competition between two powerful countries that truly do not trust each other.

But if the Americans and Soviets were able to build a functioning arms control infrastructure in the 1970s and 1980s, the U.S. and China can build an equivalent for the 21st century. Let’s hope they realize they have no other option before a catastrophe makes it inevitably obvious.


Team of the Future Lab Analysis / Time Magazine (England). Article by Ian Bremmer. Translation from English: Translation and Interpretation Team of the Future Lab.

Artificial Intelligence IX – Brazilian Regulation


Bill on Artificial Intelligence Regulation of the Chamber of Deputies of the Federative Republic of Brazil.

For most members of the Chamber of Deputies of the Federative Republic of Brazil, the artificial intelligence framework will encourage technological development. It is worth highlighting that the country is the first in Latin America to address this issue. The Bill has been approved by the Chamber and now awaits consideration and approval by the Senate. The initiative was presented under the administration of President Jair Messias Bolsonaro. Since then, it should be noted, further bills have been presented, including in the Federal Senate, that complement and improve on what has already been discussed.

For Deputies, the artificial intelligence framework will encourage technological development.

The majority of Deputies assessed that the definition of principles for the application of Artificial Intelligence in Brazil, a topic of Bill 21/20, will encourage the country’s technological development. The text was approved in the plenary of the Chamber of Deputies and, following constitutional procedures, will be forwarded to the Senate for consideration.

The rapporteur of the Bill, Deputy Luisa Canziani (PTB, State of Paraná), stated that she limited the text to principles and guidelines to be observed by the public authorities when regulating the application of artificial intelligence, in order to avoid creating rules that would discourage its adoption. She recalled that some states are already creating their own rules, which is why it is necessary to establish national legislation on the matter. “We took the best from international experiences in regulating artificial intelligence when drafting this text. If we do not approve this matter, we will inhibit investments related to innovation and artificial intelligence.”

The author of the proposal, Deputy Eduardo Bismarck (PDT, from the State of Ceará), stated that the approval of a legal framework for the sector signals to the world that Brazil is paying attention to innovation and artificial intelligence. “Artificial intelligence is already part of our reality, and Brazil will make other laws in the future. The time is now, and it is time to outline principles: rights, duties, and responsibilities.”

The proposal was criticized by Deputy Leo de Brito (PT, State of Acre), who asked for more specific rules. After issues such as state responsibility were included in the text, his party agreed to support it. According to Deputy de Brito, “we were accommodated on some fundamental issues, so we withdrew our opposition.”

For Deputy Paulo Ganime (Novo- State of Rio de Janeiro), the project is “just right.” “In this case, the framework is intended to promote technological development, the evolution of artificial intelligence in Brazil, the generation of employment and work, and greater legal security for a sector that is still in the process of development and where Brazil can become a pioneer.”

Deputy Eduardo Cury (PSDB- State of São Paulo) emphasized that the proposal is the starting point for the regulation of the issue. “The project is correctly adjusted, with the beginning of regulation that does not go into so much detail as to inhibit innovation.”

Final Draft of the Bill Presented:

The final draft of the Bill approved by the Chamber of Deputies is presented below, translated as faithfully as possible in order to respect its content.

FINAL DRAFT

BILL No. 21-A OF 2020

Establishes the foundations, principles, and guidelines for the development and application of artificial intelligence in Brazil; and makes other provisions.

THE NATIONAL CONGRESS decrees:

Art. 1º This Law establishes the foundations and principles for the development and application of artificial intelligence in Brazil, as well as guidelines for promoting and acting in this area by the public authorities.

Art. 2º For the purposes of this Law, an artificial intelligence system is considered to be a system based on a computational process that, from a set of objectives defined by humans, can, through the processing of data and information, learn to perceive and interpret the external environment, as well as interact with it, making predictions, recommendations, classifications, or decisions, and that uses, without being limited to, techniques such as:

I – Machine learning systems, including supervised, unsupervised, and reinforcement learning;

II – Systems based on knowledge or logic;

III – Statistical approaches, Bayesian inference, and search and optimization methods.

Sole paragraph. This Law does not apply to automation processes exclusively guided by predefined programming parameters that do not include the system’s ability to learn to perceive and interpret the external environment, as well as interact with it, based on actions and information received.

Art. 3º The application of artificial intelligence in Brazil has as its objective scientific and technological development, as well as:

I – Promote sustainable and inclusive economic development and the well-being of society;
II – Increase Brazilian competitiveness and productivity;
III – The competitive integration of Brazil into global value chains;
IV – Improvement in the provision of public services and the implementation of public policies;
V – Promotion of research and development to stimulate innovation in productive sectors;
VI – Protection and preservation of the environment.

Art. 4º:
The development and application of artificial intelligence in Brazil are based on the following foundations:

I – Scientific and technological development and innovation;
II – Free initiative and free competition;
III – Respect for ethics, human rights, and democratic values;
IV – Free expression of thought and free expression of intellectual, artistic, scientific, and communication activity;
V – Non-discrimination, plurality, respect for regional diversity, inclusion, and respect for the fundamental rights and guarantees of citizens;
VI – Recognition of its digital, transversal, and dynamic nature;
VII – Encouragement of self-regulation through the adoption of codes of conduct and good practice guides, in accordance with the principles established in Article 5 of this Law and with global best practices;
VIII – Security, privacy, and protection of personal data;
IX – Information security;
X – Access to information;
XI – National defense, state security, and national sovereignty;
XII – Freedom in business models, as long as it does not conflict with the provisions of this Law;
XIII – Preservation of the stability, security, resilience, and functionality of artificial intelligence systems, through the adoption of technical measures compatible with international standards and promoting best practices;
XIV – Protection of free competition and against abusive market practices, according to Law No. 12,529 of November 30, 2011; and
XV – Harmonization with Laws No. 13,709 of August 14, 2018 (General Data Protection Law), 12,965 of April 23, 2014, 12,529 of November 30, 2011, 8,078 of September 11, 1990 (Consumer Protection Code), and 12,527 of November 18, 2011.

Sole paragraph: The codes of conduct and good practice guides mentioned in item VII of the head of this article may serve as indicative elements of compliance.

Art. 5º:
The principles for the development and application of artificial intelligence in Brazil are as follows:

I – Beneficial purpose: seeking beneficial outcomes for humanity through artificial intelligence systems;
II – Human-centeredness: respect for human dignity, privacy, personal data protection, and fundamental rights when the system deals with matters related to human beings;
III – Non-discrimination: mitigation of the possibility of using systems for discriminatory, unlawful, or abusive purposes;
IV – Pursuit of neutrality: recommending that the agents involved in the development and operation of artificial intelligence systems seek to identify and mitigate biases contrary to current legislation;
V – Transparency: the right of individuals to be clearly, accessibly, and accurately informed about the use of artificial intelligence solutions, unless otherwise provided by law and respecting trade and industrial secrets, in the following cases:
a) When interacting directly with artificial intelligence systems, such as in the case of chatbots for personalized online service;
b) About the identity of the natural person operating the system autonomously, or the legal entity responsible for the operation of the artificial intelligence system;
c) About the general criteria guiding the operation of the artificial intelligence system, respecting trade and industrial secrets when there is a significant risk to fundamental rights.
VI – Security and prevention: using technical, organizational, and administrative measures compatible with best practices, international standards, and economic feasibility, to allow for the management and mitigation of risks arising from the operation of artificial intelligence systems throughout their lifecycle and continuous operation;
VII – Responsible innovation: ensuring the adoption of this Law’s provisions by the agents operating in the development and operation of artificial intelligence systems, documenting their internal processes, and assuming responsibility for the outcomes of such systems;
VIII – Data availability: no violation of copyright in the use of data, databases, and texts protected for training artificial intelligence systems, as long as it does not affect the normal exploitation of the work by its owner.

Art. 6º:
The public authorities, when regulating the application of artificial intelligence, must observe the following guidelines:

I – Subsidiary intervention: specific rules should only be developed when absolutely necessary to ensure compliance with current legislation;
II – Sectoral action: the public authorities’ actions should be carried out through the competent body or entity, considering the context and regulatory framework of each sector;
III – Risk-based management: the development and use of artificial intelligence systems must consider concrete risks. Definitions on the need for regulation and the level of intervention should be proportional to the concrete risks presented by each system and the probability of these occurring, always in comparison with:
a) The social and economic benefits the artificial intelligence system offers;
b) The risks presented by similar systems that do not involve artificial intelligence;
IV – Social and interdisciplinary participation: the adoption of rules affecting the development and operation of artificial intelligence systems will be evidence-based and preceded by public consultation, preferably online, with broad prior disclosure;
V – Regulatory impact analysis: before adopting rules affecting the development and operation of artificial intelligence systems, a regulatory impact analysis should be conducted, according to Decree No. 10,411 of June 30, 2020, and Law No. 13,874 of September 20, 2019;
VI – Responsibility: the rules regarding the responsibility of the agents operating in the development and operation of artificial intelligence systems must be based on subjective (fault-based) liability and consider the effective participation of the agents, the specific harms to be avoided or remedied, and how these agents can demonstrate their compliance with the applicable standards.

§ 1º: In the risk-based management mentioned in item III, in cases of low risk, responsible innovation will be encouraged through the use of flexible regulatory techniques.

§ 2º: In the risk-based management mentioned in item III, when high risk is identified, the public administration may, within its competence, request information on the security and prevention measures listed in item VI of Art. 5 and their respective safeguards, respecting the transparency limits established by this Law.

§ 3º: When the use of the artificial intelligence system involves consumer relations, the agent will be responsible for repairing the damages caused to consumers, within the limit of their effective participation in the damage, according to Law No. 8,078 of September 11, 1990 (Consumer Protection Code).

§ 4º: Legal entities under public law, and entities under private law providing public services, will be liable for damages caused by their agents acting in that capacity, with a right of recourse against the responsible party in cases of intent or fault.

Art. 7º
The guidelines for the actions of the Union, the States, the Federal District, and the Municipalities regarding the use and promotion of artificial intelligence systems in Brazil are as follows:

I – Promotion of trust in artificial intelligence technologies, through the dissemination of information and knowledge about their ethical and responsible uses;
II – Encouragement of investments in artificial intelligence research and development;
III – Promotion of the technological interoperability of artificial intelligence systems used by the public administration, in order to allow information exchange and streamline procedures;
IV – Encouragement of the development and adoption of artificial intelligence systems in the public and private sectors;
V – Stimulation of the training and preparation of people for the restructuring of the labor market;
VI – Promotion of innovative pedagogical practices, with a multidisciplinary perspective and an emphasis on the importance of redefining teacher training processes to address the challenges derived from the insertion of artificial intelligence as a pedagogical tool in the classroom;
VII – Stimulation of the adoption of regulatory instruments that foster innovation, such as regulatory sandbox environments, regulatory impact analysis, and sectoral self-regulation;
VIII – Encouragement of the creation of mechanisms for transparent and collaborative governance, with the participation of public authorities, the business sector, civil society, and the scientific community;
IX – Promotion of international cooperation, by encouraging the exchange of knowledge about artificial intelligence systems and the negotiation of treaties, agreements, and global technical standards that facilitate the interoperability between systems and the harmonization of legislation on this subject.

Sole paragraph: For the purposes of this article, the federal public administration will promote strategic management and guidance on the transparent and ethical use of artificial intelligence systems in the public sector, in accordance with strategic public policies for the sector.

Art. 8º
The guidelines established in Articles 6 and 7 of this Law will be applied according to the regulations of the federal Executive Power, through sectoral bodies and entities with technical competence in the matter, which must:

I – Monitor the risk management of artificial intelligence systems, in the specific case, evaluating the risks of their application and the mitigation measures within their area of competence;
II – Establish rights, duties, and responsibilities;
III – Recognize self-regulation institutions.

Art. 9º
For the purposes of this Law, artificial intelligence systems are technological representations derived from the field of computer science and computing, and it is exclusively the responsibility of the Union to legislate and regulate the matter in order to promote legal uniformity throughout the national territory, in accordance with item IV of the caput of Art. 22 of the Federal Constitution.

Art. 10
This Law comes into force 90 (ninety) days after its official publication.

Session Room, on September 29, 2021.

Deputy LUISA CANZIANI
Rapporteur


Sources

Analysis Team of the Future Lab / Information System of the Chamber of Deputies of the Federative Republic of Brazil. With the collaboration of the Chamber’s news agency (Agência Câmara de Notícias).

Translation of the central elements: Translation and Interpretation Team of the Future Lab.

Proposal for Artificial Intelligence Legislation EU VIII


The European Union’s Artificial Intelligence Regulation Law

The European Union, after listening to the opinions of various organizations, has developed the so-called Artificial Intelligence Law, which marks the beginning of regulation on this complex topic. This was done after consulting specialists, organizations, and universities. It is a very extensive subject, so we make both the law and the different opinions expressed by specialists available to readers. This should be considered very important, since, with the exception of the Chamber of Deputies of the Federative Republic of Brazil, governments in the region have not yet addressed a matter of such importance.

What is the EU AI Law?

The AI Law is a proposed European law on artificial intelligence (AI), the first such law from a major regulator anywhere in the world. The law assigns AI applications to three categories of risk. First, applications and systems that create an unacceptable risk, such as government-managed social scoring like that used in the People’s Republic of China, are prohibited.

Second, high-risk applications, such as a CV scanning tool that classifies job applicants, are subject to specific legal requirements.

Finally, applications that are not explicitly prohibited or categorized as high-risk are largely left unregulated.
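
As a purely illustrative aid, and not something contained in the proposed law itself, the short Python sketch below shows one way the three tiers could be represented in code: the two example applications named above are assigned to their tiers, and anything not listed defaults to the largely unregulated category.

```python
# Illustrative sketch of the three risk tiers described above.
# Only the examples named in the text are mapped; everything else falls
# into the "largely unregulated" default, mirroring the proposal's structure.

from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH_RISK = "subject to specific legal requirements"
    MINIMAL = "largely unregulated"


EXAMPLES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "cv-scanning tool for ranking job applicants": RiskTier.HIGH_RISK,
}


def classify(application: str) -> RiskTier:
    """Return the risk tier for an application, defaulting to the minimal tier."""
    return EXAMPLES.get(application.lower(), RiskTier.MINIMAL)


if __name__ == "__main__":
    print(classify("Government social scoring"))  # RiskTier.UNACCEPTABLE
    print(classify("Spam filter"))                # RiskTier.MINIMAL
```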

Consult the EU AI Law via this link.

Why should we care?

Artificial intelligence applications influence the information you see online by predicting what content you are drawn to, capture and analyze facial data to enforce laws or personalize advertisements, and are used to diagnose and treat cancer, for example. In other words, AI affects many parts of people’s lives.

Just like the European Union’s General Data Protection Regulation (GDPR) in 2018, the EU AI Law could become a global standard, determining the extent to which AI has a positive rather than negative effect on people’s lives wherever they are. The EU AI regulation is already causing a stir internationally. By the end of September 2021, Brazil’s Congress approved a bill creating a legal framework for artificial intelligence. It still needs to pass through the country’s Senate.

Can the regulation be improved?

There are several gaps and exceptions in the proposed law. These deficiencies limit the law’s ability to ensure AI remains a force for good in people’s lives. Currently, for example, facial recognition by the police is prohibited unless the images are captured with a delay or the technology is being used to find missing children.

Moreover, the law is inflexible. If, in two years, a dangerous AI application is used in an unforeseen sector, the law does not provide any mechanism to label it as “high-risk.”

More Detailed Analyses:

This section includes a handful of analyses of the AI Law, among many hundreds, which we have selected. We have chosen these analyses because, in our opinion, they contain constructive ideas and invite reflection on how to improve the law.

Future of Life Institute:

The Future of Life Institute (FLI), an independent nonprofit organization aimed at maximizing the benefits of technology and minimizing its associated risks, shared its recommendations for the EU AI Law with the European Commission. They argue that the law should ensure AI providers consider the impact of their applications on society as a whole, not just on the individual. AI applications that cause minimal harm to individuals could cause significant harm at the societal level. For example, a marketing application used to influence citizens’ electoral behavior could affect election outcomes. Read more of their recommendations here.

University of Cambridge Institutions:

The Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk, two leading institutions at the University of Cambridge, provided their feedback on the EU AI law proposal to the European Commission. They hope the law will help establish international standards to enable the benefits and reduce the risks of AI. One of their recommendations is to allow changes to be proposed to the list of restricted and high-risk systems, increasing the flexibility of the regulation. Read the full reasoning here.

Access Now Europe:

Access Now, an organization that advocates for and extends the digital rights of at-risk users, has also provided feedback on the EU AI Law. They are concerned that the law, in its current form, fails to achieve the goal of protecting fundamental rights. Specifically, they do not believe the proposal goes far enough to protect fundamental rights concerning biometric applications such as emotion recognition and AI polygraphs. The current draft of the AI Law calls for transparency obligations for these applications, but Access Now recommends stricter measures to reduce all associated risks, such as bans. Read their concrete suggestions here.

Michael Veale and Frederik Zuiderveen Borgesius:

Michael Veale, Assistant Professor in Digital Rights and Regulation at University College London, and Frederik Zuiderveen Borgesius, Professor of ICT and Private Law at Radboud University in the Netherlands, provide a comprehensive analysis of some of the most sensitive parts of the EU AI Law. One of the many surprising ideas in their article is that compliance with the law would depend almost entirely on self-assessment. Self-assessment means there is no enforcement of compliance with the law. Once standardization bodies like CEN and CENELEC publish their standards, third-party verification under the law will no longer be necessary. The full article can be found here.

The Future Society:

The Future Society, a nonprofit organization based in Estonia advocating for the responsible adoption of AI for the benefit of humanity, sent its comments to the European Commission on the EU AI Law. One of their suggestions is to ensure governance continues to respond to technological trends. This could be achieved by improving the flow of information between national and European institutions and systematically compiling and analyzing incident reports from member states. Read the full comments here.

Nathalie A. Smuha and colleagues:

Nathalie A. Smuha, researcher at the KU Leuven Faculty of Law, Emma Ahmed-Rengers, PhD researcher in Law and Informatics at the University of Birmingham, and their colleagues argue that the EU AI Law does not always accurately recognize the errors and harms associated with different types of AI systems nor assign responsibility appropriately. They also claim the proposal does not provide an effective framework for enforcing legal rights and duties. The proposal does not ensure meaningful transparency, accountability, and public participation rights. Read the full article here.

The European DIGITAL SME Alliance:

The European DIGITAL SME Alliance, a network of small and medium-sized ICT businesses in Europe, welcomes harmonized AI regulation and focuses on ethical AI in the EU but suggests many improvements to avoid overburdening SMEs. For example, they argue that whenever compliance assessments are based on standards, SMEs should actively participate in the development of these standards. Otherwise, the standards may be drafted in ways that are impractical for SMEs. Many other recommendations can be read here.

The Cost of the EU AI Law:

The Centre for Data Innovation, a nonprofit organization focused on data-driven innovation, published a report stating that the EU AI Law will cost 31 billion euros over the next five years and will reduce AI investments by nearly 20%. Entrepreneur Meeri Haataja and academic Joanna Bryson published their own research, arguing that it will likely be much cheaper, as the regulation primarily covers a small proportion of AI applications considered high-risk. Additionally, the cost analysis does not consider all the benefits of the regulation to the public. Finally, CEPS, a think tank and forum for discussion on EU affairs, published its own analysis of the cost estimates and reached a similar conclusion to Haataja and Bryson.

Social Harm and the Law:

Nathalie Smuha distinguishes social harm from individual harm in the context of the AI Law. Social harm does not relate to the interests of any particular individual, but considers harm to society in general, beyond the sum of individual interests. She argues that the proposal remains focused almost exclusively on individual harm concerns and seems to overlook the need for protection against social harms from AI. The full paper can be read here.

The Role of Standards:

Researchers from Oxford Information Labs discuss the role the EU Artificial Intelligence Law gives to standards for AI. The key point they highlight is that compliance with harmonized standards will create a presumption of compliance for high-risk AI applications and services. This, in turn, could increase confidence that they meet the complex requirements of the proposed regulation and create strong incentives for the industry to comply with European standards. Find the extensive analysis of the role of standards in EU AI regulation here.


Analysis Team of the Future Lab/European Union Information System.

Artificial Intelligence VII Note by OpenAI.


Should artificial intelligence be regulated?

The creators and developers of ChatGPT believe that a new superintelligence could surpass human experts in most disciplines within the next 10 years, and they are calling for supranational regulation. However, the company’s chief executive, Sam Altman, does not find the regulation proposed by the European Union acceptable.

The creators of ChatGPT have published a note (which can be read below) warning that within ten years AI systems could surpass human experts in most areas. This superintelligence, they say, will be more powerful than other technologies humanity has faced in the past and poses an existential risk to our civilization. They therefore urge the authorities to think about how to manage it. This is material worth examining, above all to consider why the creators of the technology are, at least in theory, so alarmed…

Sam Altman, Greg Brockman, Ilya Sutskever, three of the co-founders of OpenAI, the company behind ChatGPT, believe that future superintelligence will be even more capable than general artificial intelligence—a form of synthetic intelligence comparable to human intelligence—and will be as productive as today’s largest companies.

But, it’s important that we look at the content of that note:

The note from the three founders is from May 22, 2023, and it reads as follows:
Responsible Artificial Intelligence, Safety, and Alignment. Given the current state of affairs, it is conceivable that within the next ten years, artificial intelligence systems could surpass the skill level of experts in most domains and perform as much productive activity as one of today’s largest corporations. In terms of potential advantages and disadvantages, superintelligence will be more powerful than other technologies humanity has had to deal with in the past. We could have a dramatically more prosperous future, but we must manage the risk to get there. Given the possibility of existential risk, we cannot simply be reactive. Nuclear energy is a historical example of a commonly used technology with this property; synthetic biology is another example. We must also mitigate the risks of current AI technology, but superintelligence will require special treatment and coordination.
A starting point: There are many ideas that matter to us for having a good chance of successfully navigating this development; here we present our initial thoughts on three of them. First, we need some degree of coordination among the main development efforts to ensure that the development of superintelligence happens in a way that allows us to maintain safety and help the smooth integration of these systems with society. There are many ways this could be implemented: major governments worldwide could establish a project that involves many of the current efforts, or we could collectively agree (with the backing power of a new organization as suggested below) that the growth rate of AI capabilities at the frontier is limited to a certain rate per year. And, of course, individual companies must be subject to an extremely high standard of responsible conduct.
Second, we will likely eventually need something like the IAEA (International Atomic Energy Agency) for superintelligence efforts; any effort above a certain capability threshold (or resources, such as computing) should be subject to an international authority that can inspect systems, require audits, test compliance with safety standards, place restrictions on degrees of deployment and levels of security, and so on. Tracking the use of computing and energy could be very helpful and gives us some hope that this idea could actually be implemented. As a first step, companies could voluntarily agree to start implementing elements of what such an agency might one day require, and as a second step, individual countries could implement it. It would be important for such an agency to focus on reducing existential risk and not on issues that should be left to individual countries, such as defining what an AI should be allowed to say.
Third, we need the technical ability to make superintelligence safe. This is an open research question that we and others are putting a lot of effort into.
What is not within scope: We believe it is important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits). Today’s systems will create enormous value in the world and, while they carry risks, the level of those risks feels in line with other internet technologies, and the probable societal approaches seem appropriate. In contrast, the systems we are concerned about will have power beyond anything created so far, and we must be careful not to dilute the focus on them by applying similar standards to technology far below this bar.
Public Opinion and Potential: However, governance of the most powerful systems, as well as decisions related to their deployment, must have strong public oversight. We believe people around the world should democratically decide the limits and default values of AI systems. We still don’t know how to design such a mechanism, but we plan to experiment with its development. We still think that, within these broad limits, individual users should have a lot of control over how the AI they use behaves.
Given the risks and difficulties, it’s worth considering why we are building this technology. At OpenAI, we have two fundamental reasons. First, we believe it will lead to a much better world than we can imagine today (we’re already seeing early examples of this in areas like education, creative work, and personal productivity). The world faces many problems that we will need much more help to solve; this technology can improve our societies, and the creative capacity of everyone to use these new tools will surely astonish us. Economic growth and improved quality of life will be astounding.
Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the benefits are so enormous, the cost of building it decreases each year, the number of actors building it is rapidly increasing, and it is inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that is not guaranteed to work. So we have to get it right.

OpenAI’s vision, backed by Microsoft’s enormous financial support, is optimistic about the future of artificial intelligence, holding that future generations of AI could bring with them a far more prosperous world. However, they also acknowledge the great challenge of managing the existential risk these technologies entail, above all at the superintelligence stage, which could surpass human capabilities in many areas.

The creators of ChatGPT and other artificial intelligence systems agree that, although superintelligence is an attractive goal, stopping development at this point would be risky and difficult, given that the potential advantages are enormous and the actors involved in building it are multiplying rapidly. Rather than halting its progress, they prefer to tackle the problem through a proactive, controlled approach that minimizes risks and ensures a safe transition to this new era.

As for the measures proposed to mitigate the existential risk, they raise three key points:

  1. Coordinated development of the technology: It is essential that development efforts be well coordinated among governments and companies so that AI is integrated responsibly into society, ensuring that minimum safety standards are respected. OpenAI suggests two ways of doing this: one led by the world’s major governments with the participation of the key developers, and another through a collective agreement, backed by a new organization, that limits the pace at which AI capabilities grow.
  2. A global regulatory body similar to the IAEA: The second step would be to create an international organization to oversee the development of superintelligence, similar to the IAEA for nuclear energy. This body should supervise any effort that exceeds a significant capability threshold, ensuring that advanced technologies are safe and do not pose an existential risk. As a preliminary step, OpenAI suggests that companies could already begin to implement these measures voluntarily.
  3. Technical capabilities for making superintelligence safe: They stress that it is essential to advance the research needed to make superintelligence safe, to which OpenAI and others are devoting substantial resources.

Despite these suggestions, OpenAI is not happy with the regulation proposed by the European Union, which has taken a more restrictive approach to artificial intelligence. Sam Altman, CEO of OpenAI, has said that the company might even stop operating in Europe if the European AI laws are applied as they currently stand, since he considers some of the restrictions too strict. The EU law establishes three risk levels for AI technologies: one that prohibits certain uses (such as social scoring systems), another that imposes specific legal requirements, and a third for AI systems not considered high-risk, which would remain largely unregulated.

This conflict highlights the tension between the need for regulation to guarantee safety and the flexibility that technology companies demand in order to innovate quickly. In this context, OpenAI argues that, while AI development must be responsible and safe, there should not be restrictions that could hold back the progress and potential benefits of artificial intelligence.

In short, OpenAI faces a dilemma: to advance cautiously toward a future of superintelligence without slowing its development, while at the same time trying to cope with the regulatory pressures emerging around the world.
