Artificial Intelligence XIV: A Philosophical Vision

A PHILOSOPHICAL VISION OF ARTIFICIAL INTELLIGENCE. “WEAK DEMOCRACIES, CAPITALISM, AND ARTIFICIAL INTELLIGENCE ARE A DANGEROUS COMBINATION”

Mark Coeckelbergh: “Weak democracies, capitalism, and artificial intelligence are a dangerous combination.” The philosopher points out that institutions need to rely on experts to regulate technology, but without forgetting the citizens.

Mark Coeckelbergh captured the attention of an audience unaccustomed to philosophical debates: engineering students filled a room to listen to this expert in technology ethics, invited by the Robotics and Industrial Informatics Institute of the Universitat Politècnica de Catalunya. Coeckelbergh, a prolific author — two of his books are published in Spanish by Cátedra, Ethics of Artificial Intelligence (2021) and Political Philosophy of Artificial Intelligence (2023) — knows how important it is to build bridges between those who develop technologies and those who must think about how to use them.

Question: Do you think that students, engineers, and major tech companies take the ethical aspects of artificial intelligence (AI) into account?

Answer: People are aware that this technology will affect our lives because it’s everywhere, but at the same time, we are confused because the changes are very fast and complex. That’s why I think it’s important that education and research try to find an interdisciplinary path between philosophy, programming, and robotics to address these ethical issues.

Question: And what about politics?

Answer: Yes, we need to create more links between experts and politicians, but technical opinions should not be the only ones that matter. We need to figure out how to organize our democracy so that it takes the vision of experts into account, yet still lets us make decisions ourselves. Tech companies are gaining more and more power, and this is a problem because the sovereignty of nations and cities is diminishing. How much of our technological future should be left in the hands of private initiative, and how much should be public and controlled by democracies?

Question: Is artificial intelligence a threat to democracy, or are democracies already weakened?

Answer: Democracy is already vulnerable because we don’t really have complete democracies. It’s like when Gandhi was asked what he thought of Western civilization and he said it would be a good idea. The same goes for democracy: it’s a good idea, but we don’t have it fully. For me, it’s not enough to vote and have majorities; that kind of democracy is too vulnerable to populism, insufficiently participatory, and doesn’t take citizens seriously. There’s a lack of education and knowledge to achieve real democracy, and the same is true for technology. People have to understand that technology is also political, and we need to ask ourselves whether it’s good for democracy that communication infrastructures like Twitter are in private hands.

We use technology uncritically, and while a few benefit, the rest of us are exploited for our data.

Question: In what way does artificial intelligence threaten democracy?

Answer: We deal with technology without thinking; we use it uncritically, but it shapes us and uses us as instruments for power, control, and the exploitation of our data. And while a few benefit, the rest of us are exploited for our data. This affects democracies because, since they are not very resilient, technology polarizes political trends even further. This combination of weak democracies, capitalism, and artificial intelligence is dangerous. But I do believe it can be used in a more constructive way, to improve life for everyone and not just a few.

Question: Some see artificial intelligence as a way to work less and have more freedom, while others see it as a threat to their jobs.

Answer: I think AI right now empowers those who already have a privileged position or a good education: for example, they can use it to start a company. But there will be changes in employment, and there will be some transformation of the economy, so we need to be prepared. As for the argument that technology makes things easier: until now, it has led to precarious jobs, like Uber drivers, and to jobs that may be good but are stressful. For example, we are all slaves to email, and it arrived as a solution.

Question: So, the problem is not so much the technology but the system.

Answer: It’s a combination of both things, but indeed, these new technological possibilities force us to question the system more than ever. Today, the political conflict is played out in the realm of technology.

Question: What impact does it have on the media?

Answer: In this environment, the problem isn’t that people believe a lie, but that they don’t know what is a lie and what is truth. Quality journalism is very important to provide context and to try to understand the world. I think it can help people gain more knowledge, even if Artificial Intelligence is used for some tasks in the profession. Philosophers, journalists, educators, we have to provide the tools to interpret the world, because when knowledge is lacking and confusion reigns, it’s easier for a leader to come with a simple, populist solution, as has already happened in some countries in Europe.

Question: Can technology make governments more technocratic?

Answer: Politicians are confused; they feel the pressure from lobbies and create regulatory frameworks, but at no point have citizens had a say. States are becoming more and more bureaucratic because they hand power to those who control artificial intelligence. So, who is responsible? This kind of system, as Hannah Arendt said, leads to horrors. We must fight against it, with regulations that let us see why algorithms make the decisions they do and that let us know who is responsible.


Analysis Team of the Future Laboratory. Article/report by Josep Catà Figuls.

Artificial Intelligence XIII: China Regulation

THE SWIFT REGULATION OF ARTIFICIAL INTELLIGENCE IN THE PEOPLE’S REPUBLIC OF CHINA

The draft regulation is a mix of sensible restrictions on the risks of AI and a continuation of the strong tradition of Chinese government intervention in the tech industry.

In April 2023, there was a significant development in the artificial intelligence space in China: the Chinese internet regulator published a draft regulation on generative artificial intelligence. Named “Measures for the Management of Generative Artificial Intelligence Services,” the document does not call out any specific company, but the way it is written makes clear that it was inspired by the relentless launch of large language model chatbots in China and the United States.

Last week, I participated in the CBC News podcast “Nothing Is Foreign” to discuss the draft regulation and what it means for the Chinese government to take such rapid action on a technology that is still very new.

As I said on the podcast, I see the draft regulation as a mix of sensible restrictions on the risks of AI and a continuation of the strong tradition of Chinese government intervention in the tech industry.

Many of the clauses in the draft regulation are principles that AI critics in the West advocate for: the data used to train generative AI models must not infringe on intellectual property or privacy; algorithms must not discriminate against users based on race, ethnicity, age, gender, and other attributes; AI companies must be transparent about how they obtained the training data and how they hired humans to label the data.

At the same time, there are rules that other countries would likely reject. The government requires people using these generative AI tools to register with their real identity, just as they would on any social platform in China. The content generated by AI software must also “reflect the core values of socialism.”

None of these requirements are surprising. The Chinese government has tightly regulated tech companies in recent years, punishing platforms for lax moderation and incorporating new products into the established censorship regime.

The document makes that regulatory tradition easy to see: there are frequent mentions of other rules that have been passed in China regarding personal data, algorithms, deepfakes, cybersecurity, etc. In a way, it feels as if these discrete documents are slowly forming a network of rules that help the government address new challenges in the technological era.

The fact that the Chinese government can react so quickly to a new technological phenomenon is a double-edged sword. The strength of this approach, which examines each new technological trend separately, “is its precision, creating specific remedies for specific problems,” wrote Matt Sheehan, a fellow at the Carnegie Endowment for International Peace. “The weakness is its fragmented nature, with regulators forced to draft new regulations for new applications or problems.” If the government is busy playing whack-a-mole with new rules, it could miss the opportunity to think strategically about a long-term vision for AI. We can contrast this approach with that of the EU, which has been working on a “hugely ambitious” AI Act for years, as my colleague Melissa recently explained. (A recent revision of the AI Act draft included rules on generative AI.)

There’s one point I didn’t mention on the podcast but find fascinating. Despite the restrictive nature of the document, it is also a tacit encouragement for companies to continue working on AI. The proposed maximum fine for violating the rules is 100,000 RMB, around 15,000 dollars, a tiny amount for any company capable of building large language models.

Of course, if a company is fined every time its AI model violates the rules, the amounts could add up. But the size of the fine suggests that the rules are not meant to scare companies into not investing in AI. As Angela Zhang, a law professor at the University of Hong Kong, recently wrote, the government is playing multiple roles: “The Chinese government should not only be seen as a regulator but also as an advocate, sponsor, and investor in AI. The ministries that advocate for the development of AI, along with state sponsors and investors, are prepared to become a powerful counterbalance to strict AI regulation.”

It may take a few months before regulators finalize the draft, and months more before it comes into effect. But I know that many people, including myself, will be watching for any changes.

Who knows? By the time the regulation comes into effect, there might be another new viral AI product that forces the government to come up with even more rules.


Analysis Team of the Future Laboratory / MIT Technology Review – Massachusetts Institute of Technology (United States). Article by Zeyi Yang.

Translation from English: Translation and Interpretation Team of the Future Laboratory.

Artificial Intelligence XI: How the World Should Respond to the Artificial Intelligence Revolution

Pausing AI developments is not enough. We need to shut it all down.

Yudkowsky is a U.S. decision theorist and leads research at the Machine Intelligence Research Institute. He has been working on Artificial General Intelligence alignment since 2001 and is widely regarded as one of the founders of the field.

An open letter published in March 2023 calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped forward and signed it. It is a marginal improvement.

I refrained from signing because I believe the letter understates the severity of the situation and asks for too little to solve it.

The key issue is not “human-level intelligence” (as the open letter says); it’s what happens after AI reaches an intelligence greater than human. The key thresholds there may not be obvious, we certainly cannot calculate in advance what will happen and when, and it currently seems imaginable that a research lab could cross critical lines unknowingly.

Many researchers immersed in these topics, including myself, expect that the most likely outcome of building a superhumanly intelligent AI, under any circumstances remotely similar to the current ones, is that literally everyone on Earth will die. Not like “maybe possibly some remote chance,” but like “that’s the obvious thing that would happen.” It’s not that you can’t, in principle, survive by creating something much smarter than you; it’s that it would require precision and preparation and new scientific knowledge, and probably not having AI systems made up of giant, inscrutable arrays of fractional numbers.

Without that precision and preparation, the most likely outcome is an AI that doesn’t do what we want and doesn’t care about us or sentient life in general. That kind of care is something that, in principle, could be imbued into an AI, but we are not ready and currently do not know how to do it.

In the absence of that care, we get “AI doesn’t love you, nor does it hate you, and you are made of atoms it can use for something else.”

The likely outcome of humanity facing an opposing superhuman intelligence is total loss. Valid metaphors include “a 10-year-old trying to play chess against Stockfish 15,” “the 11th century trying to fight against the 21st century,” and “Australopithecus trying to fight against Homo sapiens.”

To visualize a hostile superhuman AI, don’t imagine a lifeless, intelligent thinker living inside the Internet and sending malicious emails. Imagine an entire alien civilization, thinking millions of times faster than humans, initially confined to computers, in a world of creatures that are, from its perspective, very stupid and slow. A sufficiently intelligent AI will not stay confined to computers for long. In today’s world, it can send DNA strands through email to laboratories that will produce proteins on demand, enabling an AI initially confined to the Internet to build artificial life forms or directly start post-biological molecular manufacturing.

If someone builds an AI that is too powerful, under the current conditions, I expect that every member of the human species and all biological life on Earth will die shortly thereafter.

There is no proposed plan for how we could do such a thing and survive. OpenAI’s openly declared intention is to have some future AI do our AI alignment homework. Just hearing that this is the plan should be enough to make any sensible person panic. The other leading AI lab, DeepMind, has no plan at all.

A side note: none of this danger depends on whether AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize and calculate results that meet sufficiently complicated outcome criteria. That said, it would be negligent in my moral duties as a human if I didn’t also mention that we have no idea how to determine if AI systems are self-aware, as we have no idea how to decode anything that happens in the giant inscrutable matrices, and therefore, at some point, unknowingly, we may create digital minds that are truly self-aware and should have rights and should not be property.

The rule that most people aware of these problems would have backed 50 years ago was that if an AI system can speak fluently and says it is self-aware and demands human rights, that should be a barrier to people simply owning that AI and using it beyond that point. We’ve already passed that old line in the sand. And that was probably correct. I agree that current AIs are probably just imitating conversation about self-awareness from their training data. But I point out that, with the little understanding we have of the internal parts of these systems, we actually don’t know.

If that’s our state of ignorance for GPT-4, and GPT-5 is the same giant leap in capability as GPT-3 to GPT-4, I think we will no longer be able to justifiably say “probably not self-aware” if we allow people to build GPT-5. It will just be “I don’t know; no one knows.” If you can’t be sure whether you’re creating a self-aware AI, this is alarming not just because of the moral implications of the “self-aware” part, but because not being sure means you have no idea what you’re doing. And that’s dangerous, and you should stop.

On February 7, Satya Nadella, CEO of Microsoft, publicly boasted that the new Bing would make Google “come out and show it can dance.” “I want people to know we made them dance,” he said.

This is not how a Microsoft CEO speaks in a sane world. It shows an overwhelming gap between the seriousness with which we take the problem and the seriousness with which we needed to take it 30 years ago.

We’re not going to close that gap in six months.

More than 60 years passed from when the notion of artificial intelligence was first proposed and studied until we reached current capabilities. Solving the safety of superhuman intelligence, not perfect safety, safety in the sense of “not literally killing everyone,” could reasonably take at least half of that time. And what’s tricky about trying this with superhuman intelligence is that if you mess up the first attempt, you can’t learn from your mistakes because you’re dead. Humanity doesn’t learn from error and dusts itself off and tries again, as with other challenges we’ve overcome in our history, because we’re all gone.

Trying to do something right on the first truly critical attempt is an extraordinary task, both in science and engineering. We’re not coming in with anything like the approach that would be required to do it successfully. If we applied the lesser engineering rigor standards that apply to a bridge designed to carry a couple thousand cars to the emerging field of Artificial General Intelligence, the whole field would be shut down tomorrow.

We are not prepared. We are not on track to be prepared in a reasonable time window. There is no plan. Progress in AI capabilities is massive, far ahead of the progress in AI alignment or even understanding what the hell is going on inside those systems. If we really do this, we are all going to die.

Many researchers working on these systems think we are rushing toward a catastrophe, and more of them dare to say it privately than publicly; but they think they can’t unilaterally halt the forward plunge, that others will continue even if they personally quit their jobs. And so everyone thinks they might as well keep going too. This is a stupid state of affairs and an undignified way for Earth to die, and the rest of humanity should intervene at this point and help the industry solve its collective action problem.

Some of my friends have recently informed me that when people outside the AI industry first hear about the extinction risk of Artificial General Intelligence, their reaction is “maybe we shouldn’t build AGI then.”

Hearing this gave me a small flicker of hope, because it is a simpler, more sensible, and frankly more reasonable reaction than what I’ve heard in the last 20 years of trying to get someone in the industry to take things seriously. Anyone who speaks like this deserves to hear how serious the situation actually is, and not be told that a six-month moratorium will solve it.

On March 16, my partner sent me this email. (Later they gave me permission to share it here).

“Nina lost a tooth! The usual way kids do it, not by carelessness! Seeing GPT-4 pass those standardized tests on the same day Nina reached a childhood milestone triggered an emotional wave that made me lose my head for a minute. Everything is moving too fast. I’m worried that sharing this might increase your own pain, but I’d rather you know than each of us suffer alone.”

When the internal conversation is about the pain of seeing your child lose their first tooth and thinking they won’t get the chance to grow up, I think we are past the point of playing political chess over a six-month moratorium.

If there were a plan for Earth to survive, if we just approved a six-month moratorium, I would support that plan. There is no such plan.

This is what would actually need to be done:

The moratorium on new large training runs must be indefinite and global. There can be no exceptions, even for governments or the military. If the policy starts with the US, then China must see that the US is not seeking an advantage but is trying to avoid a terribly dangerous technology that can have no true owner and that will kill everyone in the US, in China, and on Earth. If I had infinite freedom to write laws, I might carve out a single exception for AIs trained solely to solve problems in biology and biotechnology, not trained on text from the Internet, and not to the level where they begin to talk or plan; but if that were remotely complicating the issue, I would immediately discard that proposal and say to simply shut it all down.

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use to train an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less afraid of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue data center by airstrike.

Frame nothing as a conflict between national interests; make it clear that anyone talking about an arms race is a fool. That we all live or die as one, in this, is not policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that is what it takes to reduce the risk of large AI training runs.

That is the kind of policy shift that would make my partner and me hold each other and say a miracle happened, and now there’s a chance Nina will live. Sane people who hear about this for the first time and sensibly say “maybe we shouldn’t” deserve to hear, honestly, what it would take for that to happen. And when the policy ask is that big, the only way it passes is if lawmakers realize that if they conduct business as usual and do what’s politically easy, it means their own children are going to die too.

We are not ready. We are not on track to be significantly more prepared in the foreseeable future. If we proceed with this, everyone will die, including the children who didn’t choose this and did nothing wrong.


Analysis Team of the Future Laboratory / Time Magazine (United States). Article by Eliezer Yudkowsky.

Translation from English: Translation and Interpretation Team of the Future Laboratory.

How should the world respond to the Artificial Intelligence Revolution?

Bremmer is a foreign affairs columnist and editor-at-large of TIME. He is the president of Eurasia Group, a political risk consultancy, and of GZERO Media, a company focused on providing smart and engaging coverage of international affairs. He teaches applied geopolitics at the School of International and Public Affairs at Columbia University, and his most recent book is The Power of Crisis.

The growing development of artificial intelligence will produce medical breakthroughs that will save and improve billions of lives. It will become the most powerful engine of prosperity in history. It will give an incalculable number of people, including generations yet to be born, powerful tools that their ancestors never imagined. But the risks and challenges posed by Artificial Intelligence are also becoming clear, and now is the time to understand and address them. Here are the biggest ones.

The health of democracy and free markets depends on access to accurate and verifiable information. In recent years, social media has made it harder to distinguish fact from fiction, but advances in AI will unleash legions of bots that seem much more human than those we’ve encountered to date. Much more sophisticated deepfakes of audio and video will undermine our (already diminished) trust in those who serve in government and those who report the news. In China, and later in its client states, AI will take facial recognition and other tools that can be used for state surveillance to exponentially higher levels of sophistication.

This problem extends beyond our institutions because the production of “generative AI,” artificial intelligence that generates sophisticated written, visual, and other types of content in response to user prompts, is not limited to large tech companies. Anyone with a laptop and basic programming skills already has access to much more powerful AI models than existed just a few months ago and can produce unprecedented volumes of content. This proliferation challenge is about to grow exponentially as millions of people will have their own GPT running on real-time data available on the Internet. The AI revolution will enable criminals, terrorists, and other wrongdoers to code malware, create biological weapons, manipulate financial markets, and distort public opinion with astonishing ease.

Artificial intelligence can also exacerbate inequality, both within societies among small groups with wealth, access, or special skills, and between wealthier and poorer nations.

Artificial Intelligence will create disruption in the workforce. While technological advances in the past have mainly created more jobs than they have eliminated and increased productivity and prosperity in general, there are crucial caveats. Jobs created by major technological changes in the workplace require skill sets different from those they have destroyed, and the transition is never easy. Workers need to be retrained. Those who cannot retrain must be protected by a social safety net that varies in strength from place to place. Both issues are costly, and it will never be easy for governments and private companies to agree on how to share this burden.

More fundamentally, the displacement caused by Artificial Intelligence will happen on a much broader and much faster scale than past transitions. The disruption of the transition will generate economic and therefore political upheaval worldwide.

Finally, the AI revolution will also impose an emotional and spiritual cost. Humans are social animals. We thrive on interaction with others and wither in isolation. Too often, bots will replace humans as companions for many people, and by the time scientists and doctors understand the long-term impact of this trend, our growing dependence on artificial intelligence, even for companionship, may be irreversible. This may be the most important challenge of AI.

Challenges like these will require a global response. Today, artificial intelligence is not regulated by government officials but by tech companies. The reason is simple: you cannot create rules for a game you do not understand. But relying on tech companies to regulate their products is not a sustainable plan. They primarily exist to make a profit, not to protect consumers, nations, or the planet. It’s a bit like letting energy companies lead strategies to combat climate change, except that warming and its dangers are already understood in ways that the risks of AI are not, leaving us without lobbying groups that can help push for the adoption of smart and healthy policies.

So, where are the solutions? We will need national action, global cooperation, and some common-sense cooperation from the governments of the United States and China, in particular.

It will always be easier to achieve well-coordinated policy within national governments than at the international level, but political leaders have their own priorities. In Washington, policymakers have focused mainly on winning a race with China to develop the technological products that best support 21st-century security and prosperity, and this has encouraged them to give tech companies that serve the national interest something akin to a free pass. Chinese lawmakers, fearful that AI tools could undermine their political authority, have regulated much more aggressively. European lawmakers have focused less on security or profits and more on the social impact of AI advances.

But all will have to establish rules in the coming years that limit the ability of AI bots to undermine political institutions, financial markets, and national security. This means identifying and tracking bad actors, as well as helping people separate real information from false. Unfortunately, these are big, costly, and complicated steps that lawmakers are unlikely to take until they face AI-generated (but real) crises, and that work cannot begin until discussion and debate of these issues get underway.

Unlike with climate change, the world’s governments have yet to agree that the AI revolution poses an existential cross-border challenge. Here, the United Nations has a role to play as the only institution with the convening power to develop a global consensus. A UN-led approach to AI will never be the most efficient answer, but it will help achieve consensus on the nature of the problem and marshal international resources.

By forging an agreement on which risks are most likely, most impactful, and emerging most rapidly, an AI-focused equivalent of the Intergovernmental Panel on Climate Change could institutionalize regular meetings and the production of “State of AI” reports that delve ever deeper into the heart of AI-related threats. Just as with climate change, this process will also need to include public policy officials, scientists, technologists, private sector delegates, and individual activists representing the majority of member states, in order to create a COP (Conference of the Parties) process to address threats to biosafety, information freedom, workforce health, and more. There could also be an artificial intelligence agency inspired by the International Atomic Energy Agency to help monitor AI proliferation.

That said, there is no way to address the rapidly metastasizing risks created by the AI revolution without a much-needed infusion of common sense into U.S.-China relations. After all, it is the technological competition between the two countries and their major tech companies that creates the greatest risk of war, especially as AI plays an increasingly larger role in weapons and military planning.

Beijing and Washington must develop and maintain high-level discussions about the emerging threats to both countries (and the world) and the best ways to contain them. And they cannot wait for an AI version of the Cuban Missile Crisis to force them into genuine transparency in handling their competition. To create an “AI arms control agreement” with mutual monitoring and verification, each government must listen not only to each other but also to the technologists on both sides who understand the risks that need to be contained.

Crazy? Absolutely. The timing is terrible: these advances are arriving amid intense competition between two powerful countries that genuinely do not trust each other.

But if the Americans and Soviets were able to build a functioning arms control infrastructure in the 1970s and 1980s, the U.S. and China can build an equivalent for the 21st century. Let’s hope they realize they have no other option before a catastrophe makes it painfully obvious.


Analysis Team of the Future Laboratory / Time Magazine (United States). Article by Ian Bremmer. Translation from English: Translation and Interpretation Team of the Future Laboratory.

Artificial Intelligence IX – Brazilian Regulation

Bill on Artificial Intelligence Regulation of the Chamber of Deputies of the Federative Republic of Brazil.

For most members of the Chamber of Deputies of the Federative Republic of Brazil, the artificial intelligence framework will encourage technological development. It is worth highlighting that Brazil is the first country in Latin America to address this issue. The Bill has been approved by the Chamber of Deputies and awaits consideration and approval by the Senate. The initiative was presented under the administration of President Jair Messias Bolsonaro. Since then, it should be noted, further bills that complement what has already been discussed have been presented, including in the Federal Senate.

For Deputies, the artificial intelligence framework will encourage technological development.

The majority of Deputies assessed that the definition of principles for the application of artificial intelligence in Brazil, the subject of Bill 21/20, will encourage the country’s technological development. The text was approved in the plenary of the Chamber of Deputies and, following constitutional procedure, will be forwarded to the Senate for consideration.

The rapporteur of the Bill, Deputy Luisa Canziani (PTB, State of Paraná), stated that she limited the text to principles and to guidelines to be observed by the public authorities when regulating the application of artificial intelligence, in order to avoid creating rules that would discourage its adoption. She recalled that some states are already creating their own rules, which is why national legislation on the matter is necessary. “We took the best from international experiences in regulating artificial intelligence when drafting this text. If we do not approve this matter, we will inhibit investments related to innovation and artificial intelligence.”

The author of the proposal, Deputy Eduardo Bismarck (PDT, State of Ceará), stated that the approval of a legal framework for the sector signals to the world that Brazil is paying attention to innovation and artificial intelligence. “Artificial intelligence is already part of our reality, and Brazil will make other laws in the future. The time is now, and it is time to outline principles: rights, duties, and responsibilities.”

The proposal was criticized by Deputy Leo de Brito (PT, State of Acre), who requested more specific rules. After issues such as state responsibility were included in the text, his party agreed to support it. According to Deputy de Brito, “our concerns were addressed on some fundamental issues, so we withdrew our opposition.”

For Deputy Paulo Ganime (Novo, State of Rio de Janeiro), the project is “just right.” “In this case, the framework is intended to promote technological development, the evolution of artificial intelligence in Brazil, the generation of employment and work, and greater legal security for a sector that is still in the process of development and where Brazil can become a pioneer.”

Deputy Eduardo Cury (PSDB, State of São Paulo) emphasized that the proposal is the starting point for the regulation of the issue. “The project is correctly adjusted, with the beginning of regulation that does not go into so much detail as to inhibit innovation.”

Final Draft of the Bill Presented:

The final draft of the Bill approved by the Chamber of Deputies is presented below, following the original wording as closely as possible in order to respect its content faithfully.

FINAL DRAFT

BILL No. 21-A OF 2020

Establishes the foundations, principles, and guidelines for the development and application of artificial intelligence in Brazil; and other provisions.

THE NATIONAL CONGRESS decrees:

Art. 1º This Law establishes the foundations and principles for the development and application of artificial intelligence in Brazil, as well as guidelines for promoting and acting in this area by the public authorities.

Art. 2º For the purposes of this Law, an artificial intelligence system is considered to be a system based on a computational process that, from a set of objectives defined by humans, can, through the processing of data and information, learn to perceive and interpret the external environment, as well as interact with it, making predictions, recommendations, classifications, or decisions, and that uses, without being limited to, techniques such as:

I – Machine learning systems, including supervised, unsupervised, and reinforcement learning;

II – Systems based on knowledge or logic;

III – Statistical approaches, Bayesian inference, research, and optimization methods.

Sole paragraph. This Law does not apply to automation processes exclusively guided by predefined programming parameters that do not include the system’s ability to learn to perceive and interpret the external environment, as well as interact with it, based on actions and information received.
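
To make the scope of Art. 2 and its sole paragraph concrete, the following minimal sketch in Python (illustrative only, not part of the Bill; the scikit-learn library and all names here are our own hypothetical choices) contrasts an automation guided exclusively by predefined programming parameters, which the sole paragraph excludes, with a supervised machine learning system of the kind item I covers:

# Illustrative sketch; not part of Bill 21-A/2020. All names are hypothetical.
from sklearn.linear_model import LogisticRegression

# Excluded by the sole paragraph of Art. 2: behavior is fixed entirely by
# predefined programming parameters, with no learning from the environment.
def predefined_thermostat(temperature_c: float) -> str:
    return "heat_on" if temperature_c < 20.0 else "heat_off"

# Covered by Art. 2, item I (supervised machine learning): the decision
# rule is inferred from data rather than fixed in advance.
readings = [[14.0], [16.0], [18.0], [21.0], [23.0], [25.0]]  # past temperatures
labels = [1, 1, 1, 0, 0, 0]                                  # 1 = heating was on

model = LogisticRegression().fit(readings, labels)

def learned_thermostat(temperature_c: float) -> str:
    # The threshold is learned from data: in the Law's terms, the system
    # "learns to perceive and interpret the external environment."
    return "heat_on" if model.predict([[temperature_c]])[0] == 1 else "heat_off"

print(predefined_thermostat(17.5), learned_thermostat(17.5))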

Art. 3º The application of artificial intelligence in Brazil aims at scientific and technological development, as well as:

I – The promotion of sustainable and inclusive economic development and of the well-being of society;
II – Increased Brazilian competitiveness and productivity;
III – The competitive integration of Brazil into global value chains;
IV – Improved provision of public services and implementation of public policies;
V – The promotion of research and development to stimulate innovation in the productive sectors;
VI – The protection and preservation of the environment.

Art. 4º:
The development and application of artificial intelligence in Brazil are based on the following principles:

I – Scientific and technological development and innovation;
II – Free initiative and free competition;
III – Respect for ethics, human rights, and democratic values;
IV – Free expression of thought and free expression of intellectual, artistic, scientific, and communication activity;
V – Non-discrimination, plurality, respect for regional diversity, inclusion, and respect for the fundamental rights and guarantees of citizens;
VI – Recognition of its digital, transversal, and dynamic nature;
VII – Encouragement of self-regulation through the adoption of codes of conduct and good practice guides, in accordance with the principles established in Article 5 of this Law and with global best practices;
VIII – Security, privacy, and protection of personal data;
IX – Information security;
X – Access to information;
XI – National defense, state security, and national sovereignty;
XII – Freedom in business models, as long as it does not conflict with the provisions of this Law;
XIII – Preservation of the stability, security, resilience, and functionality of artificial intelligence systems, through the adoption of technical measures compatible with international standards and promoting best practices;
XIV – Protection of free competition and against abusive market practices, according to Law No. 12,529 of November 30, 2011; and
XV – Harmonization with Laws No. 13,709 of August 14, 2018 (General Data Protection Law), No. 12,965 of April 23, 2014 (Marco Civil da Internet), No. 12,529 of November 30, 2011, No. 8,078 of September 11, 1990 (Consumer Protection Code), and No. 12,527 of November 18, 2011.

Sole paragraph: The codes of conduct and good practice guides mentioned in item VII of the caput of this article may serve as indicative elements of compliance.

Art. 5º:
The principles for the development and application of artificial intelligence in Brazil are as follows:

I – Beneficial purpose: seeking beneficial outcomes for humanity through artificial intelligence systems;
II – Human-centeredness: respect for human dignity, privacy, personal data protection, and fundamental rights when the system deals with matters related to human beings;
III – Non-discrimination: mitigation of the possibility of using systems for discriminatory, unlawful, or abusive purposes;
IV – Pursuit of neutrality: recommending that the agents involved in the development and operation of artificial intelligence systems seek to identify and mitigate biases contrary to current legislation;
V – Transparency: the right of individuals to be clearly, accessibly, and accurately informed about the use of artificial intelligence solutions, unless otherwise provided by law and respecting trade and industrial secrets, in the following cases:
a) When interacting directly with artificial intelligence systems, such as in the case of chatbots for personalized online service;
b) About the identity of the natural person operating the system autonomously, or the legal entity responsible for the operation of the artificial intelligence system;
c) About the general criteria guiding the operation of the artificial intelligence system, respecting trade and industrial secrets when there is a significant risk to fundamental rights.
VI – Security and prevention: using technical, organizational, and administrative measures compatible with best practices, international standards, and economic feasibility, to allow for the management and mitigation of risks arising from the operation of artificial intelligence systems throughout their lifecycle and continuous operation;
VII – Responsible innovation: ensuring the adoption of this Law’s provisions by the agents operating in the development and operation of artificial intelligence systems, documenting their internal processes, and assuming responsibility for the outcomes of such systems;
VIII – Data availability: no violation of copyright in the use of data, databases, and texts protected for training artificial intelligence systems, as long as it does not affect the normal exploitation of the work by its owner.

Art. 6º:
The public authorities, when regulating the application of artificial intelligence, must observe the following guidelines:

I – Subsidiary intervention: specific rules should only be developed when absolutely necessary to ensure compliance with current legislation;
II – Sectoral action: the public authorities’ actions should be carried out through the competent body or entity, considering the context and regulatory framework of each sector;
III – Risk-based management: the development and use of artificial intelligence systems must consider concrete risks. Definitions on the need for regulation and the level of intervention should be proportional to the concrete risks presented by each system and the probability of these occurring, always in comparison with:
a) The social and economic benefits the artificial intelligence system offers;
b) The risks presented by similar systems that do not involve artificial intelligence;
IV – Social and interdisciplinary participation: the adoption of rules affecting the development and operation of artificial intelligence systems will be evidence-based and preceded by public consultation, preferably online, with broad prior disclosure;
V – Regulatory impact analysis: before adopting rules affecting the development and operation of artificial intelligence systems, a regulatory impact analysis should be conducted, according to Decree No. 10,411 of June 30, 2020, and Law No. 13,874 of September 20, 2019;
VI – Responsibility: the rules on the liability of agents operating in the development and operation of artificial intelligence systems must be based on subjective (fault-based) liability and must consider the effective participation of the agents, the specific harms to be avoided or remedied, and how these agents can demonstrate compliance with the applicable standards.

§ 1º: In the risk-based management mentioned in item III, in cases of low risk, responsible innovation will be encouraged through the use of flexible regulatory techniques.

§ 2º: In the risk-based management mentioned in item III, when high risk is identified, the public administration may, within its competence, request information on the security and prevention measures listed in item VI of Art. 5 and their respective safeguards, respecting the transparency limits established by this Law.

§ 3º: When the use of the artificial intelligence system involves consumer relations, the agent will be responsible for repairing the damages caused to consumers, within the limit of their effective participation in the damage, according to Law No. 8,078 of September 11, 1990 (Consumer Protection Code).

§ 4º: Legal entities under public law, and those under private law providing public services, will be liable for damages caused by their agents acting in that capacity, with the right of recourse against the responsible party in cases of willful misconduct or fault.

Art. 7º
The guidelines for the actions of the Union, the States, the Federal District, and the Municipalities regarding the use and promotion of artificial intelligence systems in Brazil are as follows:

I – Promotion of trust in artificial intelligence technologies, through the dissemination of information and knowledge about their ethical and responsible uses;
II – Encouragement of investment in artificial intelligence research and development;
III – Promotion of the technological interoperability of artificial intelligence systems used by the public administration, so as to allow information exchange and streamline procedures;
IV – Encouragement of the development and adoption of artificial intelligence systems in the public and private sectors;
V – Stimulation of the training and preparation of individuals for the restructuring of the labor market;
VI – Promotion of innovative pedagogical practices, with a multidisciplinary perspective and an emphasis on redefining teacher-training processes to address the challenges arising from the introduction of artificial intelligence as a pedagogical tool in the classroom;
VII – Stimulation of the adoption of regulatory instruments that foster innovation, such as regulatory sandboxes, regulatory impact analysis, and sectoral self-regulation;
VIII – Encouragement of the creation of mechanisms for transparent and collaborative governance, with the participation of public authorities, the business sector, civil society, and the scientific community;
IX – Promotion of international cooperation, by encouraging the exchange of knowledge about artificial intelligence systems and the negotiation of treaties, agreements, and global technical standards that facilitate interoperability between systems and the harmonization of legislation on the subject.

Sole paragraph: For the purposes of this article, the federal public administration will promote strategic management and guidance on the transparent and ethical use of artificial intelligence systems in the public sector, in accordance with strategic public policies for the sector.

Art. 8º
The guidelines established in Articles 6 and 7 of this Law will be applied according to the regulations of the federal Executive Power, through sectoral bodies and entities with technical competence in the matter, which must:

I – Monitor the risk management of artificial intelligence systems, in the specific case, evaluating the risks of their application and the mitigation measures within their area of competence;
II – Establish rights, duties, and responsibilities;
III – Recognize self-regulation institutions.

Art. 9º
For the purposes of this Law, artificial intelligence systems are technological representations derived from the field of computer science and computing, and it is exclusively the responsibility of the Union to legislate and regulate the matter in order to promote legal uniformity throughout the national territory, in accordance with item IV of the caput of Art. 22 of the Federal Constitution.

Art. 10
This Law comes into force 90 (ninety) days after its official publication.

Session Room, on September 29, 2021.

Deputy LUISA CANZIANI
Rapporteur


Sources

Analysis Team of the Future Laboratory / Information System of the Chamber of Deputies of the Federative Republic of Brazil. With the collaboration of the Agência Câmara de Notícias (Chamber News Agency).

Translation of the central elements: Translation and Interpretation Team of the Future Laboratory.
