EU officials made history last week, enduring almost 36 hours of grueling debate to finally settle on a first-of-its-kind comprehensive AI safety and transparency framework: the AI Act.
Let's dive in.
The AI Act is the new legal framework that sets crucial requirements for developers of AI systems (such as OpenAI), deployers, and users.
The AI Act takes a “risk-based approach” to products and services that use AI, regulating uses of AI rather than the technology itself. The riskier an AI application is, the stiffer the rules.
The new legislation prohibits AI systems that pose an "unacceptable risk" from being deployed in the EU, and imposes tiered obligations on systems categorised as "high risk" or "limited risk".
AI developers and deployers will also need to comply with EU copyright law and publish summaries of the content they used for training.
The final text of the legislation has yet to be published.
The AI Act won’t take effect until two years after final approval from European lawmakers, expected in early 2024, so it will likely come into force in 2026.
In the interim, the EU will launch an "AI Pact", urging companies to begin following the rules voluntarily. There are, however, no penalties if they don't.
The AI Act will apply to providers, deployers and users of in-scope AI systems used in the EU, irrespective of where they are established. Providers and deployers based in third countries, such as the US, will therefore have to comply with the AI Act if the output of their systems is used in the EU.
The requirements of the AI Act differ depending on the risk level posed by the AI system.
For example, AI systems presenting a limited risk will be subject to lighter-touch transparency obligations, such as informing users that the content they are engaging with is AI-generated.
High-risk AI systems will be subject to tougher requirements and obligations, such as the need to carry out a mandatory fundamental rights impact assessment. People will have a right to receive explanations about decisions based on the use of high-risk AI systems that affect their rights.
AI systems presenting unacceptable risk will be banned.
Examples include:
- cognitive behavioural manipulation of people or vulnerable groups;
- social scoring based on behaviour or personal characteristics;
- untargeted scraping of facial images from the internet or CCTV footage;
- emotion recognition in the workplace and educational institutions;
- biometric categorisation to infer sensitive data, such as political or religious beliefs or sexual orientation.
Violations of the AI Act could draw fines of up to 35 million euros ($38 million) or 7% of a company’s global annual turnover, depending on the infringement:
- 35 million euros or 7% of global turnover for violations involving banned AI applications;
- 15 million euros or 3% for violations of the Act’s obligations;
- 7.5 million euros or 1.5% for supplying incorrect information.
In addition, the political agreement envisages more proportionate caps on administrative fines for SMEs and start-ups.
It’s the first comprehensive AI legislation of its kind. The law might therefore set the global regulatory standard for AI, much as the EU’s GDPR did for privacy rules.
It’s difficult to predict what the AI landscape will look like in 2026. Things are moving fast.