The European Union, which approved the AI Act last August, has decided to abandon the planned AI Liability Directive, which the European Commission had been expected to work on in 2025. In the Commission’s 2025 Work Programme, published on February 11, the institution lists the points that will guide its agenda this year and identifies a total of 37 pending proposals it has decided to withdraw.
This new rule, which could be seen as redundant alongside the already approved AI Act, was likely abandoned to strengthen Europe’s image as a competitive, innovation-friendly region at a time when its U.S. rival is willing to follow the ‘move fast and break things’ doctrine to the letter. The proposal the Commission is dropping aimed to ensure responsible AI development that respects consumer rights, safety, and privacy.
As with most of the other discarded measures, the Commission argues that no consensus could be reached to move it forward. Going forward, the institution will consider either a new regulatory approach or abandoning the initiative altogether. “No agreement is foreseen: the Commission will assess whether another proposal should be presented or another type of approach should be chosen,” the document states.
The withdrawal of this proposal coincides with two events. On one hand, there is the rise to stardom of Le Chat, the new chatbot created in France and quickly dubbed the European answer to ChatGPT. France appears to have become Europe’s engine for artificial intelligence, having secured around €109 billion in investments in data centers and AI projects.
On the other hand, on February 10 and 11, European leaders met with U.S. Vice President JD Vance at the AI Action Summit in Paris. The Republican used the occasion to criticize the European regulatory model, defending instead the freedom to invest and experiment as the way to unleash technological progress.
At Full Speed:
The investments secured by France and the success of the Le Chat chatbot, created on European soil, give the EU a fresh boost of optimism in the AI race, so the bloc may be reluctant to add regulatory limits beyond those already laid down in the AI Act approved in August 2024.
Furthermore, with lighter regulation, Europe aligns itself with the United States in prioritizing rapid innovation. This is despite allegations that companies such as OpenAI have engaged in practices like copyright infringement to obtain training material for their language models.
The AI Act already in force establishes different risk levels depending on how the technology is used. For example, biometric technology combined with AI can represent an “unacceptable risk,” given its potential use for surveillance and control of people.