The AI Act and other related laws: Embracing the legislation that is shaping the future of AI

Written by Giulia Carnà
April 5, 2024


The EU has positioned the Artificial Intelligence Act (“AI Act”) as a cornerstone of its legal framework, designed to work in conjunction with other legislation with which it has both gaps and overlaps: the General Data Protection Regulation (“GDPR”), the Artificial Intelligence Liability Directive (“AILD”), the EU Product Liability Directive, and the Cyber Resilience Act (“CRA”). The EU intends the AI Act to have the same impact as the GDPR, meaning it will significantly affect global markets and practices, and other jurisdictions may use it as a model for their own AI legislation.

Furthermore, EU officials have announced plans to develop additional, more targeted AI laws after the EU elections in June 2024. These laws are expected to regulate the use of AI in employment and to provide specific legislation on copyright and AI.

This article takes a practical approach to the AI Act, without describing the legislative process or all the amendments made since the European Commission's first proposal on April 21, 2021.

Adoption and Implementation

It is worth mentioning that the AI Act was formally adopted by the European Parliament on March 13, 2024; before becoming law, it must still receive the Council's formal endorsement. It is expected to enter into force in the coming months, most likely in late April or early May 2024.

EU Product Liability Directive

On March 12, 2024, just one day before adopting the AI Act, the European Parliament formally adopted the revised EU Product Liability Directive, which aims to cover new technologies such as AI and updates the EU rules and procedures under which consumers obtain compensation for damage caused by defective products. The revised Directive clarifies that a “provider” of an AI system under the AI Act is considered a “manufacturer” under the Directive, making the AI system provider primarily liable for harm caused by AI systems. Importantly, the Directive eases the burden of proof on consumers in relation to complex or ‘black box’ AI, where the consumer has difficulty proving that the product is defective due to its “technical or scientific complexity”.

The Council will now formally adopt the Directive, after which it will be published in the EU Official Journal. The new rules will apply to products placed on the market from 24 months after the Directive's entry into force, in accordance with each EU Member State's national implementing law.

Similarly, the AILD takes an approach consistent with the EU Product Liability Directive by creating a presumption of causality, reducing the burden of proof for victims, and empowering national courts to order disclosure of evidence about high-risk AI systems (the ‘black box’). While the AI Act aims to prevent harm caused by AI, the AILD regulates compensation for harm caused by AI through the application of liability law.

Cyber Resilience Act

On the same day that the European Parliament formally adopted the EU Product Liability Directive, it also approved the Cyber Resilience Act. The CRA imposes cybersecurity assessments and requirements on products with digital elements (“PDEs”). This legislation, too, must be formally adopted by the Council before it becomes law. In relation to high-risk AI systems, the CRA explicitly provides that PDEs which also qualify as high-risk AI systems under the AI Act will be deemed to comply with the AI Act’s cybersecurity requirements where they fulfil the corresponding requirements of the CRA.

AI Act & the GDPR

The AI Act and the GDPR have different scopes, definitions, and requirements; this can create difficulties for providers and deployers of AI systems, and for data subjects whose personal data is processed through these systems.

Scope and Obligations

One of the main differences between the AI Act and the GDPR is their scope of application. The AI Act applies to providers, deployers, importers, authorized representatives, distributors, and operators of AI systems used in the EU market, irrespective of their location. The GDPR, on the other hand, applies to controllers and processors who process personal data in the context of an establishment of a controller or processor in the EU, or who offer goods or services to, or monitor the behavior of, data subjects in the EU. This means that AI systems that process the personal data of non-EU individuals, or that do not process personal data at all, may fall under the AI Act but not under the GDPR.

Providers and deployers have specific obligations under the AI Act, with most of the regulatory burden placed on providers, especially in the context of high-risk AI systems. During the development phase, providers will generally act as controllers under the GDPR. As per Article 35 of the GDPR, controllers are required to conduct a data protection impact assessment (DPIA). The AI Act acknowledges this obligation in Article 26(9) by stating that deployers of high-risk AI systems must use the information provided by the provider, pursuant to Article 13 of the AI Act, to conduct DPIAs, since high-risk systems often process personal data. I look forward to further guidance from authorities such as the EDPB on the terminology and assessment methods used, as well as to coordination between the relevant authorities to ensure consistent interpretation.

The AI Act's risk-based approach

The AI Act takes a risk-based approach and categorizes AI systems into four risk categories: unacceptable risk (prohibited), high risk, limited risk, and minimal risk.

Prohibited AI systems, as described in Article 5 of the AI Act, must be phased out within six months of the AI Act coming into force. These are systems that pose a significant risk to fundamental rights, safety, or health, such as: social credit scoring systems; emotion-recognition systems that use biometric data in the workplace and educational institutions (except for medical or safety reasons); untargeted scraping of facial images from the internet or CCTV footage for facial recognition purposes; and AI systems used to make risk assessments of natural persons based solely on profiling or on an assessment of their personality traits and characteristics, except where such systems are used to support a human assessment of a person's involvement in criminal activity.

High-risk AI systems fall into two groups. The first covers AI systems intended to be used as a safety component of a product, or which are themselves a product, covered by the EU harmonization legislation listed in Annex I, where that product or system must undergo a third-party conformity assessment under that legislation. The second covers the AI systems referred to in Annex III; a provider who considers that an Annex III system is not high-risk must document its assessment before placing that system on the market or putting it into service.

Limited risk AI systems are not high-risk, but they do pose transparency risks. They are therefore subject to specific transparency requirements under the AI Act: for example, providers must ensure that users are aware that they are interacting with a machine.

On the other hand, minimal risk AI systems can be freely used without any additional requirements mandated by the Act. Examples of these systems include video games, weather forecasting algorithms, and language translation tools.

In addition to the above risk categories, the AI Act imposes specific obligations on providers of generative AI models, which are used to create general purpose AI systems such as ChatGPT. These providers are required to perform fundamental rights impact assessments and conformity assessments, implement risk management and quality management systems to continually assess and mitigate systemic risks, inform individuals when they interact with AI, and test and monitor for accuracy and cybersecurity.

Ensuring Effective Compliance

To comply effectively with all applicable legislation, it is important to consider how these laws interact and to identify any potential gaps. The first step for a company should be to identify the applicable laws and any additional sector-specific regulations, and then to map out the potential areas of overlap. After this, it is crucial to conduct or update risk assessments, including DPIAs and any other required assessments, and to update policies and notices accordingly. It is also important to leverage the processes already in place for privacy and procurement compliance.