The AI Act is approved

Written by Aušra Mažutavičienė
March 14, 2024

1. The AI Act is now approved

Yesterday (March 13th), the European Parliament approved the AI Act with an overwhelming majority. The law passed with 523 votes in favor, 46 against, and 49 abstentions. It will most likely enter into force in May of this year and become fully applicable 24 months later. However, there are certain exceptions:

  • bans on prohibited practices (six months after entry into force);
  • codes of practice (nine months after entry into force);
  • general-purpose AI rules including governance (12 months after entry into force); and
  • obligations for high-risk systems (36 months after entry into force).
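The staggered timeline above can be sketched as simple date arithmetic. Note that the May 1st, 2024 entry-into-force date below is an assumption for illustration only; the article says only that entry into force will "most likely" happen in May.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the date `months` calendar months after `d` (same day of month)."""
    y, m = divmod(d.year * 12 + (d.month - 1) + months, 12)
    return date(y, m + 1, d.day)

# Hypothetical entry-into-force date (the actual date was not yet fixed).
entry_into_force = date(2024, 5, 1)

deadlines = {
    "bans on prohibited practices": add_months(entry_into_force, 6),
    "codes of practice": add_months(entry_into_force, 9),
    "general-purpose AI rules incl. governance": add_months(entry_into_force, 12),
    "full application of the AI Act": add_months(entry_into_force, 24),
    "obligations for high-risk systems": add_months(entry_into_force, 36),
}

for item, deadline in sorted(deadlines.items(), key=lambda kv: kv[1]):
    print(f"{deadline.isoformat()}  {item}")
```

Under that assumed start date, the bans would bite in late 2024 while high-risk obligations would only apply from 2027.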

A risk-based approach

The AI Act establishes obligations for AI systems based on their potential risk and level of impact. Systems are divided into four main categories: unacceptable, high, limited, and minimal risk.

Unacceptable risk: All AI systems that pose a threat to the safety and/or rights of people will be banned. Examples include social scoring by governments and voice-assisted toys that encourage dangerous behavior.

High risk: Strict obligations are introduced for high-risk AI systems, for instance those used in critical infrastructure, education and vocational training, employment, essential private and public services, certain law-enforcement systems, migration and border management, and justice and democratic processes (e.g. elections). Such AI systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight.

Limited risk: General-purpose AI (GPAI) systems, and the models they are based on, are regarded as limited-risk systems. They must nonetheless meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. More powerful GPAI models face additional requirements, such as performing model evaluations, assessing and mitigating systemic risks, and reporting incidents. In addition, artificial or manipulated images, audio, or video content ("deepfakes") must be clearly labeled as such.

Minimal risk: The AI Act allows the free use of minimal-risk AI. This includes, for example, AI-enabled video games and spam filters.

Enforcing authorities

The AI Act will be enforced by national authorities in each EU member state, supported by the new AI Office within the European Commission. It is now up to each member state to set up a national authority, and they have only 12 months to do so.

2. The European Commission violates GDPR

This week, it emerged that the European Data Protection Supervisor (EDPS) had found that the European Commission's use of Microsoft 365 violates the GDPR.

The Commission allegedly failed to put adequate safeguards in place to ensure compliant personal data transfers outside the EEA. The EDPS required the Commission to suspend all data flows through its use of Microsoft 365 by December 9th, 2024.

So what can we learn from this?

First of all, no one is perfect. But more importantly: you need to make sure you assess all your data processors and document this process. If you're in doubt about how best to vet your vendors (for instance Microsoft 365), then join our upcoming webinars.