EU agrees on provisional rules for AI regulation

Late last Friday, the European Union (EU) agreed to the world’s first set of provisional rules to regulate artificial intelligence (AI), a move hailed as historic but one that has drawn mixed reactions from stakeholders. The EU AI Act categorizes AI applications into four risk levels, imposing the strictest rules on high-risk and prohibited applications. One key point of negotiation was the regulation of foundation models, the technology underlying OpenAI’s ChatGPT. France and Germany, in particular, warned against over-regulation that could hamper their champion AI start-ups.

French President Emmanuel Macron expressed concern that over-regulation could stifle innovation in Europe, stating, “We are all very far behind the Chinese and the Americans.” The EU plans to regulate foundation models by requiring developers to provide documentation on training methods and data, granting users the right to lodge complaints, and prohibiting discrimination. Non-compliant companies face fines of up to €35 million or 7% of global revenue, a level some argue is excessive.

Critics such as the Computer & Communications Industry Association (CCIA) argued that the Act departs from the “sensible risk-based approach” proposed by the Commission, warning that it could hinder innovation and drive AI talent out of Europe. France Digitale, which represents European start-ups and investors, raised concerns about the long and costly CE-marking process for high-risk AI, the potential disclosure of confidential business models, and the risk that rules could later change through delegated acts, undermining the predictability start-ups need to develop.

The EU AI Act also addresses copyright issues surrounding AI models trained on online materials. It includes strict copyright rules, requiring compliance with the EU’s current copyright law and public disclosure of the content used to train general-purpose AI models. Véronique Desbrosse, general manager of The European Authors’ Societies, a group representing 32 European author societies, welcomed the Act’s transparency requirements and its adherence to existing EU law as protections for rightsholders.

The Act imposes strict restrictions on facial recognition and other biometric and behavioral-analysis technologies, with limited exceptions for law enforcement. These provisions have been well received, as have the data protection rules, with the Act designed to complement the EU’s General Data Protection Regulation (GDPR). However, concerns remain about how the Act will apply to general-purpose AI systems and whether an international surveillance regime can remain compatible with evolving cybersecurity standards.

While finalization of the draft text is still ongoing and expected to continue into January 2024 or beyond, the European Parliament elections in June could influence the items that remain to be agreed. Benjamin Docquir, head of IT and data at the international law firm Osborne Clarke, noted that the AI Liability Directive may need to be taken up by the new Parliament and Commission. The treatment of open-source AI software, which allows code reuse, and of AI in the workplace are also still to be decided.

With AI technology advancing rapidly and the EU AI Act unlikely to be enforced for another two years, the regulation may already be outdated despite efforts to make it flexible. Docquir highlighted the challenge of future-proofing rules for such a powerful technology, particularly given the emergence of generative AI. While the EU’s AI Act is a significant step towards regulating AI, its impact and effectiveness will become clearer in the coming years as implementation unfolds.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.