Proposal for Artificial Intelligence Legislation EU VIII


June 3, 2023

The European Union’s Artificial Intelligence Regulation Law

The European Union, after consulting specialists, organizations, and universities, has developed the so-called Artificial Intelligence Law, which marks the beginning of regulation in this complex area. Because the subject is so extensive, we make both the law and the various opinions expressed by specialists available to readers. This is especially significant because, with the exception of the Chamber of Deputies of the Federative Republic of Brazil, regional governments have not yet addressed a matter of such importance.

What is the EU AI Law?

The AI Law is a proposed European law on artificial intelligence (AI), the first such law from a major regulator anywhere in the world. The law assigns AI applications to three categories of risk. First, applications and systems that create an unacceptable risk, such as government-managed social scoring like that used in the People’s Republic of China, are prohibited.

Second, high-risk applications, such as a CV scanning tool that classifies job applicants, are subject to specific legal requirements.

Finally, applications that are not explicitly prohibited or categorized as high-risk are largely left unregulated.

Consult the EU AI Law via this link.

Why should we care?

Artificial intelligence applications influence the information you see online by predicting what content you are drawn to, capture and analyze facial data to enforce laws or personalize advertisements, and are used to diagnose and treat cancer, for example. In other words, AI affects many parts of people’s lives.

Just like the European Union’s General Data Protection Regulation (GDPR) in 2018, the EU AI Law could become a global standard, determining the extent to which AI has a positive rather than negative effect on people’s lives wherever they are. The EU AI regulation is already causing a stir internationally. In late September 2021, Brazil’s Congress approved a bill creating a legal framework for artificial intelligence; it still needs to pass the country’s Senate.

Can the regulation be improved?

There are several gaps and exceptions in the proposed law, and these deficiencies limit its ability to ensure AI remains a force for good in people’s lives. For example, facial recognition by the police is currently prohibited unless the images are captured with a delay or the technology is being used to find missing children.

Moreover, the law is inflexible. If, in two years, a dangerous AI application is used in an unforeseen sector, the law does not provide any mechanism to label it as “high-risk.”

More Detailed Analyses:

From the many hundreds of analyses of the AI Law, this section presents a handful we have selected because, in our opinion, they contain constructive ideas and invite reflection on how to improve the law.

Future of Life Institute:

The Future of Life Institute (FLI), an independent nonprofit organization aimed at maximizing the benefits of technology and minimizing its associated risks, shared its recommendations for the EU AI Law with the European Commission. They argue that the law should ensure AI providers consider the impact of their applications on society as a whole, not just on the individual. AI applications that cause minimal harm to individuals could cause significant harm at the societal level. For example, a marketing application used to influence citizens’ electoral behavior could affect election outcomes. Read more of the recommendations here.

University of Cambridge Institutions:

The Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk, two leading institutions at the University of Cambridge, provided their feedback on the EU AI law proposal to the European Commission. They hope the law will help establish international standards to enable the benefits and reduce the risks of AI. One of their recommendations is to allow changes to be proposed to the list of restricted and high-risk systems, increasing the flexibility of the regulation. Read the full reasoning here.

Access Now Europe:

Access Now, an organization that advocates for and extends the digital rights of at-risk users, has also provided feedback on the EU AI Law. They are concerned that the law, in its current form, fails to achieve the goal of protecting fundamental rights. Specifically, they do not believe the proposal goes far enough to protect fundamental rights concerning biometric applications such as emotion recognition and AI polygraphs. The current draft of the AI Law calls for transparency obligations for these applications, but Access Now recommends stricter measures to reduce all associated risks, such as bans. Read their concrete suggestions here.

Michael Veale and Frederik Zuiderveen Borgesius:

Michael Veale, Assistant Professor in Digital Rights and Regulation at University College London, and Frederik Zuiderveen Borgesius, Professor of ICT and Private Law at Radboud University in the Netherlands, provide a comprehensive analysis of some of the most sensitive parts of the EU AI Law. One of the many surprising ideas in their article is that compliance with the law would almost entirely depend on self-assessment, meaning there is no external enforcement of compliance. Once standardization bodies like CEN and CENELEC publish their standards, third-party verification under the law will no longer be necessary. The full article can be found here.

The Future Society:

The Future Society, a nonprofit organization based in Estonia advocating for the responsible adoption of AI for the benefit of humanity, sent its comments to the European Commission on the EU AI Law. One of their suggestions is to ensure governance continues to respond to technological trends. This could be achieved by improving the flow of information between national and European institutions and systematically compiling and analyzing incident reports from member states. Read the full comments here.

Nathalie A. Smuha and colleagues:

Nathalie A. Smuha, researcher at the KU Leuven Faculty of Law, Emma Ahmed-Rengers, PhD researcher in Law and Informatics at the University of Birmingham, and their colleagues argue that the EU AI Law does not always accurately recognize the errors and harms associated with different types of AI systems nor assign responsibility appropriately. They also claim the proposal does not provide an effective framework for enforcing legal rights and duties. The proposal does not ensure meaningful transparency, accountability, and public participation rights. Read the full article here.

The European DIGITAL SME Alliance:

The European DIGITAL SME Alliance, a network of small and medium-sized ICT businesses in Europe, welcomes harmonized AI regulation and a focus on ethical AI in the EU, but suggests many improvements to avoid overburdening SMEs. For example, they argue that whenever compliance assessments are based on standards, SMEs should actively participate in developing those standards. Otherwise, the standards may be drafted in ways that are impractical for SMEs. Many other recommendations can be read here.

The Cost of the EU AI Law:

The Centre for Data Innovation, a nonprofit organization focused on data-driven innovation, published a report stating that the EU AI Law will cost €31 billion over the next five years and reduce AI investment by nearly 20%. Entrepreneur Meeri Haataja and academic Joanna Bryson published their own research, arguing that compliance will likely be much cheaper, since the regulation primarily covers the small proportion of AI applications considered high-risk. Additionally, the cost analysis does not account for all the benefits of the regulation to the public. Finally, CEPS, a think tank and forum for discussion on EU affairs, published its own analysis of the cost estimates and reached a conclusion similar to that of Haataja and Bryson.

Social Harm and the Law:

Nathalie Smuha distinguishes social harm from individual harm in the context of the AI Law. Social harm does not relate to the interests of any particular individual, but considers harm to society in general, beyond the sum of individual interests. She argues that the proposal remains focused almost exclusively on individual harm concerns and seems to overlook the need for protection against social harms from AI. The full paper can be read here.

The Role of Standards:

Researchers from Oxford Information Labs discuss the role the EU Artificial Intelligence Law gives to standards for AI. The key point they highlight is that compliance with harmonized standards will create a presumption of compliance for high-risk AI applications and services. This, in turn, could increase confidence that they meet the complex requirements of the proposed regulation and create strong incentives for the industry to comply with European standards. Find the extensive analysis of the role of standards in EU AI regulation here.


Analysis Team of the Future Lab/European Union Information System.

Author: Laboratory of the Future analysis team
