The European Union’s Artificial Intelligence Law
The European Union, after consulting specialists, organizations, and universities, has developed the so-called Artificial Intelligence Law, the first step toward regulating this complex field. Because the subject is so extensive, we make both the law and the various opinions expressed by specialists available to readers. This should be considered very important, since, with the exception of the Chamber of Deputies of the Federative Republic of Brazil, regional governments have not yet addressed a matter of such importance.
What is the EU AI Law?
The AI Law is a proposed European law on artificial intelligence (AI), the first such law from a major regulator anywhere in the world. The law assigns AI applications to three categories of risk. First, applications and systems that create an unacceptable risk, such as government-managed social scoring like that used in the People’s Republic of China, are prohibited.
Second, high-risk applications, such as a CV scanning tool that classifies job applicants, are subject to specific legal requirements.
Finally, applications that are not explicitly prohibited or categorized as high-risk are largely left unregulated.
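For readers who find a concrete illustration helpful, below is a minimal, purely hypothetical sketch of how the three risk tiers just described could be modeled in code. The tier names, the example use cases, and the classify_application function are our own illustrative assumptions, not terminology or logic taken from the law itself.

```python
from enum import Enum


class RiskTier(Enum):
    """Hypothetical encoding of the three risk tiers described in the proposal."""
    UNACCEPTABLE = "prohibited"             # e.g. government-run social scoring
    HIGH = "specific legal requirements"    # e.g. CV-scanning tools that rank applicants
    MINIMAL = "largely unregulated"         # everything not explicitly prohibited or high-risk


# Illustrative mapping of example use cases to tiers (assumptions, not legal text).
EXAMPLE_USE_CASES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "cv screening for job applicants": RiskTier.HIGH,
    "spam filtering": RiskTier.MINIMAL,
}


def classify_application(use_case: str) -> RiskTier:
    """Return the assumed risk tier for a use case, defaulting to the minimal tier."""
    return EXAMPLE_USE_CASES.get(use_case.lower(), RiskTier.MINIMAL)


if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(f"{case!r} -> {classify_application(case).name}")
```

The default-to-minimal behavior in this sketch mirrors the point made above: anything not explicitly prohibited or categorized as high-risk is largely left unregulated.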
Consult the EU AI Law via this link.
Why should we care?
Artificial intelligence applications influence the information you see online by predicting what content you are drawn to, capture and analyze facial data to enforce laws or personalize advertisements, and are used to diagnose and treat cancer, for example. In other words, AI affects many parts of people’s lives.
Just like the European Union’s General Data Protection Regulation (GDPR) in 2018, the EU AI Law could become a global standard, determining the extent to which AI has a positive rather than negative effect on people’s lives wherever they are. The EU AI regulation is already causing a stir internationally. By the end of September 2021, Brazil’s Congress approved a bill creating a legal framework for artificial intelligence. It still needs to pass through the country’s Senate.
Can the regulation be improved?
There are several gaps and exceptions in the proposed law. These deficiencies limit the law’s ability to ensure AI remains a force for good in people’s lives. Currently, for example, facial recognition by the police is prohibited unless the images are captured with a delay or the technology is being used to find missing children.
Moreover, the law is inflexible. If, in two years, a dangerous AI application is used in an unforeseen sector, the law does not provide any mechanism to label it as “high-risk.”
More Detailed Analyses:
This section includes a handful of analyses of the AI Law, selected from among many hundreds. We have chosen these analyses because, in our opinion, they contain constructive ideas and invite reflection on how to improve the law.
Future of Life Institute:
The Future of Life Institute (FLI), an independent nonprofit organization aimed at maximizing the benefits of technology and minimizing its associated risks, shared its recommendations for the EU AI Law with the European Commission. They argue that the law should ensure AI providers consider the impact of their applications on society as a whole, not just on the individual. AI applications that cause minimal harm to individuals could cause significant harm at the societal level. For example, a marketing application used to influence citizens’ electoral behavior could affect election outcomes. Read more of the recommendations here.
University of Cambridge Institutions:
The Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk, two leading institutions at the University of Cambridge, provided their feedback on the EU AI law proposal to the European Commission. They hope the law will help establish international standards to enable the benefits and reduce the risks of AI. One of their recommendations is to allow changes to be proposed to the list of restricted and high-risk systems, increasing the flexibility of the regulation. Read the full reasoning here.
Access Now Europe:
Access Now, an organization that advocates for and extends the digital rights of at-risk users, has also provided feedback on the EU AI Law. They are concerned that the law, in its current form, fails to achieve the goal of protecting fundamental rights. Specifically, they do not believe the proposal goes far enough to protect fundamental rights concerning biometric applications such as emotion recognition and AI polygraphs. The current draft of the AI Law calls for transparency obligations for these applications, but Access Now recommends stricter measures to reduce all associated risks, such as bans. Read their concrete suggestions here.
Michael Veale and Frederik Zuiderveen Borgesius:
Michael Veale, Assistant Professor in Digital Rights and Regulation at University College London, and Frederik Zuiderveen Borgesius, Professor of ICT and Private Law at Radboud University in the Netherlands, provide a comprehensive analysis of some of the most sensitive parts of the EU AI Law. One of the many surprising ideas in their article is that compliance with the law would depend almost entirely on self-assessment, meaning there is no external enforcement of compliance. Once standardization bodies such as CEN and CENELEC publish their standards, third-party verification under the law will no longer be necessary. The full article can be found here.
The Future Society:
The Future Society, a nonprofit organization based in Estonia advocating for the responsible adoption of AI for the benefit of humanity, sent its comments to the European Commission on the EU AI Law. One of their suggestions is to ensure governance continues to respond to technological trends. This could be achieved by improving the flow of information between national and European institutions and systematically compiling and analyzing incident reports from member states. Read the full comments here.
Nathalie A. Smuha and colleagues:
Nathalie A. Smuha, researcher at the KU Leuven Faculty of Law, Emma Ahmed-Rengers, PhD researcher in Law and Informatics at the University of Birmingham, and their colleagues argue that the EU AI Law does not always accurately recognize the errors and harms associated with different types of AI systems nor assign responsibility appropriately. They also claim the proposal does not provide an effective framework for enforcing legal rights and duties. The proposal does not ensure meaningful transparency, accountability, and public participation rights. Read the full article here.
The European DIGITAL SME Alliance:
The European DIGITAL SME Alliance, a network of small and medium-sized ICT businesses in Europe, welcomes harmonized AI regulation and focuses on ethical AI in the EU but suggests many improvements to avoid overburdening SMEs. For example, they argue that whenever compliance assessments are based on standards, SMEs should actively participate in the development of these standards. Otherwise, the standards may be drafted in ways that are impractical for SMEs. Many other recommendations can be read here.
The Cost of the EU AI Law:
The Centre for Data Innovation, a nonprofit organization focused on data-driven innovation, published a report stating that the EU AI Law will cost 31 billion euros over the next five years and will reduce AI investments by nearly 20%. Entrepreneur Meeri Haataja and academic Joanna Bryson published their own research, arguing that it will likely be much cheaper, as the regulation primarily covers a small proportion of AI applications considered high-risk. Additionally, the cost analysis does not consider all the benefits of the regulation to the public. Finally, CEPS, a think tank and forum for discussion on EU affairs, published its own analysis of the cost estimates and reached a similar conclusion to Haataja and Bryson.
Social Harm and the Law:
Nathalie Smuha distinguishes social harm from individual harm in the context of the AI Law. Social harm does not relate to the interests of any particular individual, but considers harm to society in general, beyond the sum of individual interests. She argues that the proposal remains focused almost exclusively on individual harm concerns and seems to overlook the need for protection against social harms from AI. The full paper can be read here.
The Role of Standards:
Researchers from Oxford Information Labs discuss the role the EU Artificial Intelligence Law gives to standards for AI. The key point they highlight is that compliance with harmonized standards will create a presumption of compliance for high-risk AI applications and services. This, in turn, could increase confidence that they meet the complex requirements of the proposed regulation and create strong incentives for the industry to comply with European standards. Find the extensive analysis of the role of standards in EU AI regulation here.
Analysis Team of the Future Lab/European Union Information System:
The creators and developers of ChatGPT believe that a new superintelligence could surpass human experts in most disciplines within the next ten years and are calling for supranational regulation. However, the company’s chief executive, Sam Altman, does not find the regulation proposed by the European Union acceptable.
The creators of ChatGPT have published a note (reproduced below) in which they warn that within ten years, AI systems could surpass human experts in most areas. This superintelligence, they say, will be more powerful than other technologies humanity has faced in the past and poses an existential risk to our civilization. Therefore, they urge authorities to think about how to manage it. This is material to examine and, above all, a reason to consider why the creators of the technology are, at least in theory, so alarmed…
Sam Altman, Greg Brockman, Ilya Sutskever, three of the co-founders of OpenAI, the company behind ChatGPT, believe that future superintelligence will be even more capable than general artificial intelligence—a form of synthetic intelligence comparable to human intelligence—and will be as productive as today’s largest companies.
But it is important to look at the content of that note:
The note from the three founders is dated May 22, 2023, and reads as follows: Responsible Artificial Intelligence, Safety, and Alignment.

Given the current state of affairs, it is conceivable that within the next ten years, artificial intelligence systems could surpass the skill level of experts in most domains and perform as much productive activity as one of today’s largest corporations. In terms of potential advantages and disadvantages, superintelligence will be more powerful than other technologies humanity has had to deal with in the past. We could have a dramatically more prosperous future, but we must manage the risk to get there. Given the possibility of existential risk, we cannot simply be reactive. Nuclear energy is a historical example of a commonly used technology with this property; synthetic biology is another. We must also mitigate the risks of current AI technology, but superintelligence will require special treatment and coordination.

A starting point: There are many ideas that matter to us for having a good chance of successfully navigating this development; here we present our initial thoughts on three of them.

First, we need some degree of coordination among the main development efforts to ensure that the development of superintelligence happens in a way that allows us to maintain safety and helps the smooth integration of these systems with society. There are many ways this could be implemented: major governments worldwide could establish a project that involves many of the current efforts, or we could collectively agree (with the backing power of a new organization, as suggested below) that the growth rate of AI capabilities at the frontier is limited to a certain rate per year. And, of course, individual companies must be held to an extremely high standard of responsible conduct.

Second, we will likely eventually need something like the IAEA (International Atomic Energy Agency) for superintelligence efforts; any effort above a certain capability threshold (or resources such as computing) should be subject to an international authority that can inspect systems, require audits, test compliance with safety standards, impose restrictions on degrees of deployment and levels of safety, and so on. Monitoring the use of computing and energy could be very helpful and gives us some hope that this idea could actually be implemented. As a first step, companies could voluntarily agree to start implementing elements of what such an agency might one day require, and as a second step, individual countries could implement it. It would be important for such an agency to focus on reducing existential risk and not on issues that should be left to individual countries, such as defining what an AI should be allowed to say.

Third, we need the technical capability to make superintelligence safe. This is an open research question that we and others are putting a lot of effort into.

What is not within scope: We believe it is important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits). Today’s systems will create enormous value in the world and, while they carry risks, the level of those risks feels in line with other internet technologies, and the probable societal approaches seem appropriate.
In contrast, the systems we are concerned about will have power beyond anything created so far, and we must be careful not to dilute the focus on them by applying similar standards to technology far below this bar.

Public Opinion and Potential: However, governance of the most powerful systems, as well as decisions related to their deployment, must have strong public oversight. We believe people around the world should democratically decide the limits and default settings of AI systems. We still do not know how to design such a mechanism, but we plan to experiment with its development. We still think that, within these broad limits, individual users should have a lot of control over how the AI they use behaves.

Given the risks and difficulties, it is worth considering why we are building this technology at all. At OpenAI, we have two fundamental reasons. First, we believe it will lead to a much better world than we can imagine today (we are already seeing early examples of this in areas like education, creative work, and personal productivity). The world faces many problems that we will need much more help to solve; this technology can improve our societies, and the creative capacity of everyone to use these new tools will surely astonish us. Economic growth and improved quality of life will be astounding.

Second, we believe that stopping the creation of superintelligence would be unintuitive, risky, and difficult. Because the benefits are so enormous, the cost of building it decreases each year, the number of actors building it is rapidly increasing, and it is inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that is not guaranteed to work. So we have to get it right.
OpenAI’s vision, backed by Microsoft’s multimillion-dollar support, is optimistic about the future of artificial intelligence, holding that future generations of AI could bring with them a much more prosperous world. However, they also acknowledge the great challenge of managing the existential risk these technologies entail, especially at the superintelligence stage, which could surpass human capabilities in multiple areas.
The creators of ChatGPT and other artificial intelligence systems agree that, although superintelligence is an attractive goal, halting development at this point would be risky and difficult, given that the potential advantages are enormous and the actors involved in its creation are multiplying rapidly. Rather than stopping its progress, they prefer to address the problem through a proactive, controlled approach that minimizes risks and ensures a safe transition to this new era.
As for the measures proposed to mitigate the existential risk, they raise three key points:
Coordinated Development of the Technology: It is essential that development efforts be well coordinated between governments and companies in order to integrate AI responsibly into society, ensuring that minimum safety standards are respected. OpenAI suggests two ways of doing this: one led by governments worldwide with the participation of the key developers, and another through a collective agreement, backed by a new organization, that limits the pace of growth of AI capabilities.
A Global Regulatory Body Similar to the IAEA: The second step would be to create an international organization to oversee the development of superintelligence, similar to the IAEA for nuclear energy. This body should oversee any effort that exceeds a significant capability threshold, ensuring that advanced technologies are safe and do not pose an existential risk. As a preliminary measure, OpenAI suggests that companies could already begin to implement these measures voluntarily.
Technical Capabilities for Superintelligence Safety: They state that it is essential to advance research toward achieving a safe superintelligence, something to which OpenAI and others are devoting substantial resources.
Despite these suggestions, OpenAI is not happy with the regulation proposed by the European Union, which has taken a more restrictive approach to artificial intelligence. Sam Altman, CEO of OpenAI, has said that the company might even stop operating in Europe if the European AI laws are applied as they stand, since he considers some of the restrictions too strict. The EU law establishes three risk levels for AI technologies: one that prohibits certain uses (such as social scoring systems), another that imposes specific legal requirements, and a third for AI systems that are not considered high-risk and that would remain largely unregulated.
This conflict highlights the tension between the need for regulation to guarantee safety and the flexibility that technology companies demand in order to innovate quickly. In this context, OpenAI argues that, while AI development must be responsible and safe, there should be no restrictions that could hold back the progress and potential benefits of artificial intelligence.
In short, OpenAI faces a dilemma: moving toward a future of superintelligence with caution, but without slowing its development, while at the same time trying to deal with the regulatory pressures emerging around the world.
The imperative need to comply with the rules and protect user data.
Once again, Mr. Zuckerberg, CEO of Meta, faces problems with European regulatory authorities over Facebook/Meta. Although we have already witnessed his appearances before these authorities in previous years, which were neither particularly successful nor friendly, the European Union, through its competent bodies, has struck hard at the company for ignoring existing provisions. For Europeans, the protection of user data is not a minor matter. And, judging by the record, it could not be stated with absolute certainty that Zuckerberg is a man strictly concerned with these details. One only needs to remember episodes like Cambridge Analytica, which we recall in a frame at the end of this post for those who may not remember. Those episodes of direct manipulation of data and people cost the company a fortune in fines in the United States and the United Kingdom for alleged manipulation of voters in favor of Brexit (Britain’s exit from the European Union).
Now, the European Union has imposed a record fine of 1.2 billion euros on Meta, the owner of Facebook, for transferring data of European citizens to the United States, as announced by the Irish Data Protection Commission, the agency responsible for overseeing privacy regulations compliance in the EU.
The European data protection body believes that Facebook has illegally stored European citizens’ data on its U.S. servers for years. The fine is a European record in privacy matters, surpassing the 746-million-euro penalty imposed on Amazon in 2021. In addition, the European Union gives Meta a five-month deadline to stop sending European user data to the United States and six months to delete any personal information previously transferred.
In March 2022, the European Union and the United States reached an agreement on the principles of a new framework to ensure the free transfer of personal data between the two blocs, as publicly announced by U.S. President Joseph Biden and European Commission President Ursula Von der Leyen. The transfer framework was suspended in 2020 when the Court of Justice of the European Union annulled the existing agreement, ruling that the U.S. did not guarantee the privacy of European citizens’ data.
The Irish body believes that the contractual clauses Meta uses to transfer data to the U.S. “do not address the risks to the fundamental rights and freedoms” of European Facebook users raised by this ruling.
The agreement between the two blocs has not yet come into effect. Nick Clegg, Meta’s President of Global Affairs, urged that the new framework be implemented. “I’m frustrated to see that the announced agreement has not been put into action,” he said. “We need to ensure that data can flow.”
The sanction increases the pressure on the U.S. government to finalize the agreement that would allow thousands of multinational companies to continue sending European user data to the United States. The final agreement could be ready as early as July, though it could be delayed until autumn.
Meta has even threatened to exit its operations in the European Union (by the way, threats are one of Meta’s preferred tools) if both blocs do not reach an agreement to allow data transfer. The previous framework was annulled by European courts after a complaint against Facebook was filed by Austrian Max Schrems, who sought to prevent the transfer of European citizens’ data to the U.S. because U.S. laws do not provide the same level of protection as European data protection regulations.
The American company has stated that it will appeal both the ruling and the fine imposed, and will request the suspension of the order before the courts. It also pointed out that there will be no immediate interruption of Facebook in Europe.
In a statement signed by Nick Clegg (an interesting figure, by the way; Mr. Zuckerberg spares no expense), the company says that it has used the contractual clauses with the conviction that this legal instrument met the requirements of the European data protection regulation. “We will appeal and request the courts suspend the application deadlines, given the harm that these orders would cause, including to the millions of people who use Facebook every day.”
As is typical, this is a daily issue—the protection of user data rights and ownership—so we will need to stay tuned for the next chapter, even though for Meta/Facebook, the situation doesn’t seem ideal in this particular case.
Facebook and the Cambridge Analytica Case: In the 2010s, the British consulting firm Cambridge Analytica collected data from millions of Facebook users without their consent, mainly for use in political propaganda. The data was obtained through an app called This Is Your Digital Life, developed by computer scientist Aleksandr Kogan and his company Global Science Research in 2013. The app consisted of a series of questions to create psychological profiles of users, and it gathered personal data from the contacts of its users via Facebook’s Open Graph platform.

The app collected data from up to 87 million Facebook profiles, and Cambridge Analytica used this data to provide analytical support to the campaigns of Ted Cruz and Donald Trump for the 2016 presidential elections. Cambridge Analytica was also accused of interfering in the Brexit referendum, although the official investigation acknowledged that the company did not intervene “beyond certain initial inquiries” and that there were no “significant breaches.”

Information about the misuse of data came to light thanks to Christopher Wylie, a former employee of Cambridge Analytica, in interviews with The Guardian and The New York Times. In response, Facebook apologized for its role in data collection, and its CEO, Mark Zuckerberg, had to testify before the United States Congress. In July 2019, it was announced that the Federal Trade Commission had imposed a $5 billion fine on Facebook for its privacy violations. In October 2019, Facebook agreed to pay a £500,000 fine to the UK Information Commissioner’s Office for exposing its users’ data to a “serious risk of harm.” In May 2018, Cambridge Analytica declared bankruptcy in the U.S.

Other advertising agencies had been implementing various forms of psychological tracking for years, and Facebook patented similar technology in 2012. However, Cambridge Analytica’s transparency about its methods and the caliber of its clients, such as Trump’s presidential campaign and the pro-Brexit campaign, raised public awareness of the issues posed by psychological tracking, a concern that scholars had been warning about for years. The scandal sparked growing public interest in privacy and the influence of social media on politics. The hashtag #DeleteFacebook trended on Twitter.
[i] Sir Nicholas William Peter Clegg, known as Nick Clegg, is a British social-liberal politician. At 32, he was elected as a Member of the European Parliament, an institution he served in until 2004. From the 2005 to 2017 general elections in the United Kingdom, he represented the Sheffield Hallam constituency in the UK Parliament. Early in his parliamentary career, he served as his party’s Home Affairs Spokesperson. Just two years after entering the House of Commons, he was elected Leader of the Liberal Democrats.
Clegg led his party in the 2010 UK general elections, in which no party achieved an absolute majority in the House of Commons. Due to this situation, the Conservative Party led by David Cameron formed a coalition government with the Liberals, and Clegg was appointed Deputy Prime Minister of the UK on May 11, 2010, the second most important title in the UK government. After the 2015 UK general elections, Clegg resigned as leader of his party, and as a result of the Conservative victory in those elections, he ceased to hold the position of Deputy Prime Minister.
On October 19, 2018, Clegg was appointed Vice President of Global Affairs and Communications at Facebook in Palo Alto after the company’s stock value dropped by 15% in 2018. He began his role in January 2019.
In May 2018, Clegg joined David Miliband and Nicky Morgan in calling for a soft Brexit. On June 23, 2018, Clegg participated in the march organized by People’s Vote in London to commemorate the second anniversary of the referendum to leave the European Union. People’s Vote is a group that called for a public vote on the final Brexit deal between the UK and the EU. In October 2018, it was announced that Clegg had been hired as the official public relations head in his role as Vice President of Global Affairs and Communications at Facebook, replacing Elliot Schrage. His annual salary is 4.5 million euros, which is sixty times what he earned as an MP.