European Parliament Approves AI Act: A Step Towards Regulating AI in the EU

Last week, the European Parliament approved the highly anticipated Artificial Intelligence (AI) Act, a monumental piece of EU regulation that will impose strict controls and responsibilities on the rapidly evolving AI sector. Although some protections were stripped out under industry lobbying, the legislation represents a significant step forward in regulating this promising yet potentially threatening technology. The EU once again demonstrates its position as a global leader in developing rules for technology and data, following landmark legislation such as the General Data Protection Regulation (GDPR) and the Digital Services Act and Digital Markets Act (DSA and DMA).

The AI Act adopts a risk-based approach: the level of regulation depends on the potential risks and impacts of a given type of AI. Companies will not have the luxury of self-regulation. Instead, high-risk AI systems used in critical sectors such as education, healthcare, banking, and law enforcement will be subject to stricter controls and safeguards, with additional requirements for risk assessment, transparency, accuracy, and oversight. Citizens will also have the right to submit complaints and to receive explanations about decisions made by AI systems.

Notably, the Act imposes controls on general-purpose AI models and the large language models built on them, such as OpenAI’s ChatGPT and Google’s Gemini. These models will have to comply with EU copyright law and provide summaries of the material used for training. This requirement targets hidden biases and content ownership, both of which have been the subject of legal cases and of concerns raised by activists and researchers.

Additionally, the legislation tackles deceptive media, requiring that “deep fake” images and videos be clearly labeled and detectable through techniques such as digital watermarking. Certain AI applications, including categorization systems based on sensitive characteristics and real-time facial recognition, are banned outright. However, concessions were made for law enforcement, allowing limited use of biometric data and facial recognition to combat serious crime. Critics argue that this may lead to excessive surveillance of vulnerable groups without adequate personal protections or rights of redress.
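To make the watermarking idea concrete, here is a toy sketch of a least-significant-bit (LSB) watermark in plain Python. This is purely illustrative and is not how production content-provenance watermarks work (real schemes must survive compression and editing); the function names and the example pixel values are invented for this sketch.

```python
def embed_watermark(pixels, mark_bits):
    """Embed watermark bits into the least significant bit of each pixel value.

    pixels: list of 0-255 intensity values; mark_bits: list of 0/1 bits.
    Returns a new pixel list; pixels beyond the mark length are unchanged.
    """
    out = list(pixels)
    for i, bit in enumerate(mark_bits):
        # Clear the lowest bit, then set it to the watermark bit.
        out[i] = (out[i] & ~1) | bit
    return out


def extract_watermark(pixels, n_bits):
    """Read back the first n_bits watermark bits from the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]


if __name__ == "__main__":
    marked = embed_watermark([200, 13, 7, 90, 55], [1, 0, 1, 1])
    print(marked)                        # pixel values change by at most 1
    print(extract_watermark(marked, 4))  # recovers the embedded bits
```

Because each pixel changes by at most one intensity level, the mark is invisible to the eye yet trivially machine-detectable, which is the basic trade-off any "detectability" requirement in the Act would have to formalize far more robustly.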

Despite these regulatory measures, challenges remain. Many of the Act’s stipulations lack enforceable technical solutions, raising concerns about interpretation and practical implementation. Basic building blocks such as workable watermarks do not yet exist, highlighting the need for further development and clarity. There is also the difficulty of regulating a technology whose inner workings are poorly understood: unlike other tech and software regulation that demands transparency, the complexity of AI poses unique challenges. Understanding AI algorithms, and holding companies accountable for their actions, remains a formidable task.

As highlighted by security experts Nathan Sanders and Bruce Schneier, the AI Act falls short in addressing deep structural issues inherent in the AI and social media industries. Both industries share common problems, including invasive surveillance, dangerous virality, platform lock-in, and monopolistic practices. These issues require further attention and solutions beyond the scope of the current legislation.

While the AI Act is a commendable start in regulating AI, it should be viewed as an evolving framework that needs continuous adaptation and refinement. Some AI applications may need to be restricted until the Act’s enforceability matches its intended goals. To effectively address the challenges and threats posed by AI, a comprehensive approach that tackles the broader structural issues is necessary.

As the EU pioneers AI regulation, the world will be watching closely to see how this groundbreaking legislation unfolds in practice. With technology evolving rapidly, it is crucial to strike a delicate balance between promoting innovation and mitigating risks to ensure a responsible and ethical AI landscape.


Written by

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.