Europe Set to Finalize Comprehensive AI Rules

European Union lawmakers are on the cusp of finalizing the world’s first comprehensive set of AI rules, with the Artificial Intelligence Act expected to be approved and take effect later this year. The legislation is set to serve as a global benchmark for governments grappling with how to regulate the fast-developing technology. The AI Act aims to ensure a human-centric approach, keeping humans in control of AI while using the technology to drive economic growth and societal progress and to unlock human potential.

“The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it — the technology — helps us leverage new discoveries, economic growth, societal progress and unlock human potential,” said Dragos Tudorache, a Romanian lawmaker who was involved in the Parliament negotiations on the draft law.

The AI Act adopts a risk-based approach, applying different levels of regulation to AI applications depending on the risk they pose. Low-risk systems, such as content recommendation engines or spam filters, will face lighter rules, while high-risk uses of AI, such as in medical devices or critical infrastructure, will face stricter requirements, including the use of high-quality data and the provision of clear information to users. The legislation also prohibits certain uses of AI deemed to pose an unacceptable risk, including social scoring systems, certain types of predictive policing, and emotion recognition systems in schools and workplaces.

One significant addition to the law’s early drafts is a set of provisions for generative AI models, such as OpenAI’s ChatGPT, which can produce original, lifelike responses. Developers of these models will have to provide a detailed summary of the data used to train their systems, comply with EU copyright law, and label AI-generated deepfake content. The largest and most powerful AI models will face extra scrutiny because of their potential systemic risks, including the possibility of spreading harmful biases.

The European Union’s AI rules are expected to influence global AI governance. In the United States, President Joe Biden signed an executive order on AI in October 2023, and lawmakers in several states are working on their own AI legislation. China has proposed its own Global AI Governance Initiative, and other countries and international groupings, such as Brazil, Japan, the United Nations, and the Group of Seven industrialized nations, are also developing AI regulations.

The AI Act is on track to become law by May or June, with provisions taking effect in stages. EU member countries will need to ban prohibited AI systems six months after the rules enter into force, and rules for general-purpose AI systems like chatbots will apply a year later. By mid-2026, the complete set of regulations, including the requirements for high-risk systems, will be in force. Each EU country will establish its own AI watchdog, with which citizens can file complaints if they believe the rules have been violated, and Brussels will create an AI Office responsible for enforcing and supervising the law. Violations of the AI Act could result in fines of up to 35 million euros ($38 million) or 7% of a company’s global revenue.

With the AI Act, Europe is cementing its position as a global leader in AI regulation. The legislation aims to strike a balance between promoting innovation and protecting individuals, while setting a precedent for other countries to follow. As AI continues to reshape our world, these rules will guide the responsible development and use of the technology, ensuring that it remains a force for good.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.