AI Compliance Through a GDPR Lens

Written by Sille Sloth

Navigating the AI Act as a Privacy Professional

Around May 2018 (hopefully before, but not necessarily), many new responsibilities were handed out across organisations. I think quite a few employees ended up with job descriptions looking very different from what they had before the GDPR came into effect.

In many companies, the GDPR role landed with legal - in some cases with finance, HR, or even the office all-rounder. Regardless of how it happened, someone had to take on the responsibility.

I actively chose to work with GDPR. But now I'm navigating a new regulatory landscape that just sort of fell onto my desk, and I can better relate to those who had the GDPR role assigned to them.

Whatever this newer role should be called, or wherever it may best be placed, I think many privacy professionals are already handling a lot of new AI-related tasks.

On some levels, the placement of AI compliance with the privacy teams makes sense—both frameworks aim to protect fundamental rights and ensure responsible use of data. However, this shift also presents challenges. Unlike the GDPR, which is fundamentally rights-based, the AI Act is centered around product safety and technical risk management.

And while it’s an exciting development, adapting to a new regulatory framework also comes with challenges—shifting perspectives and learning to approach compliance in a different way.

For management, it can be difficult to grasp why AI compliance is not just an extension of GDPR. But as privacy professionals, we do have strengths we can leverage—and also gaps we (and our management) must acknowledge. Understanding where our expertise applies and where we need to adapt our approach is key to effectively governing AI compliance.

I have outlined a few of the areas where I have used my experience with the GDPR and information security, and where I needed - and still need - to shift my perspective (and of course a little disclaimer: this is in no way an exhaustive list 😉).

How GDPR Expertise Can Support AI Compliance:

Compliance Programs
Just like the GDPR, the AI Act requires organisations to establish solid governance structures, policies, and risk management frameworks. New policies and risk parameters need to be drafted, but if you have experience setting up privacy compliance programs, you can be a key contributor to AI governance. 

Transparency and Accountability
Both regulations require clear guidelines for use, documentation, and reporting. The accountability principle in the GDPR required organisations to step up their focus on documentation, and we can build on that culture to incorporate the documentation the AI Act requires.

Risk Management & Impact Evaluations
As a privacy professional, you are constantly making risk assessments, some on the fly and some more formally structured. If you are familiar with DPIAs, you know the workload, but you may also have systems or procedures that, with a little effort, can form the basis of the Fundamental Rights Impact Assessments (FRIAs) and conformity assessments for high-risk AI systems introduced by the AI Act. You also have the skills to communicate risks and possible mitigations to your stakeholders.
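To make that reuse concrete, here is a minimal sketch (in Python, purely illustrative) of how existing DPIA sections could seed the FRIA elements listed in Article 27 of the AI Act. The section names are my own paraphrases, not legal text, and the mapping is an assumption about how your templates might line up.

```python
# Illustrative sketch: seeding an AI Act FRIA (Article 27) from existing
# GDPR DPIA work. Section names are paraphrased, not official wording.
DPIA_TO_FRIA = {
    "systematic description of the processing":
        "description of the deployer's processes in which the AI system is used",
    "categories of data subjects":
        "categories of persons and groups likely to be affected",
    "risks to the rights and freedoms of data subjects":
        "specific risks of harm to those affected persons or groups",
    "measures envisaged to address the risks":
        "human oversight measures and steps to take if risks materialise",
}

for dpia_section, fria_element in DPIA_TO_FRIA.items():
    print(f"DPIA: {dpia_section}\n  -> FRIA: {fria_element}\n")
```

Even a rough mapping like this can help show stakeholders how much of the FRIA workload is already covered by existing DPIA procedures.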

Vendor Management
If you are used to vetting vendors and negotiating data processing agreements, you are already well on your way here. Organisations must ensure that AI providers and third-party services comply with the regulations. Privacy professionals are already experienced in reviewing contracts and assessing vendors, and this now extends to AI system providers.

Data Flow and Systems Mapping
AI systems often process large amounts of data, including personal data. Privacy professionals already managing data inventories (e.g., GDPR Article 30 records) can help identify the AI systems that fall under the AI Act. Use your ROPA as a starting point when mapping your AI systems and vendors.
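As a sketch of what that starting point could look like, here is a minimal, hypothetical data model that layers AI Act fields on top of an Article 30 record. All field names and category values are illustrative assumptions, not prescribed by either regulation.

```python
# A minimal sketch of an AI system inventory entry that extends a
# GDPR Article 30 (ROPA) record; all field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class RopaRecord:
    # Typical Article 30 fields you likely already track
    processing_activity: str
    controller: str
    purposes: list[str]
    data_categories: list[str]
    vendors: list[str] = field(default_factory=list)

@dataclass
class AiSystemEntry(RopaRecord):
    # AI Act additions layered on top of the existing record
    is_ai_system: bool = False          # meets the AI Act definition?
    role: str = "deployer"              # provider / deployer / importer / distributor
    risk_category: str = "unassessed"   # prohibited / high / limited / minimal
    provider_documentation: list[str] = field(default_factory=list)

# Example: reusing a ROPA entry as the starting point for AI mapping
entry = AiSystemEntry(
    processing_activity="CV screening",
    controller="ExampleCo",
    purposes=["recruitment"],
    data_categories=["contact details", "employment history"],
    vendors=["ExampleVendor"],
    is_ai_system=True,
    role="deployer",
    risk_category="high",  # employment use cases are listed in Annex III
)
print(entry.risk_category)
```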

Awareness & Training
The AI Act mandates AI literacy & training (Article 4). Privacy professionals can leverage their experience in compliance training programs to educate stakeholders about AI risks and obligations. We may need to draw on more technical expertise to elaborate on the tech side, but you can incorporate AI training into your annual awareness wheel. 

Stakeholder Management
When I first started as a privacy lawyer, I quickly realised how much of the job revolved around stakeholder (and project) management, something that was not a big part of the law degree. These are necessary skills to build as a privacy professional, and you can definitely leverage them when working on AI compliance.

Where to Adapt:

From Individual Rights to Product Regulation
The GDPR focuses on protecting individuals' fundamental rights, and while the AI Act introduces a Fundamental Rights Impact Assessment, the Act regulates the technology itself. It is not about the processing activity but about the product. When interpreting the AI Act, you have to shift your perspective to a product safety mindset, more like looking at safety requirements for e.g. toys or other consumer products. This shift helped me better understand and communicate the individual requirements of the AI Act, where my first instinct was to focus on the purpose, the types of data, and the rights of the data subjects.

Technology Neutral vs. Technology Specific
As a privacy professional, you can fairly quickly assess whether an activity falls under the GDPR or other similar regulations. But the assessment under the AI Act may require a lot more technical insight. Though "AI system" is defined in the regulation, in my experience people are still very unsure about when a system constitutes an AI system as defined by the AI Act. I try to break the definition down into smaller, more direct questions, and then I have tech teams help me answer whether any particular system fits the bill.
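For illustration, here is one way those smaller questions could be structured, paraphrasing the elements of the Article 3(1) definition. The wording and the triage logic are my own simplification, not official guidance, and borderline cases still need a proper legal review.

```python
# A hedged sketch of breaking the AI Act's "AI system" definition
# (Article 3(1)) into yes/no questions a tech team can answer; the
# phrasing is a paraphrase of the definition, not official guidance.
CORE_ELEMENTS = [
    "Is it a machine-based system?",
    "Is it designed to operate with some level of autonomy?",
    "Does it infer, from the input it receives, how to generate outputs?",
    "Are the outputs predictions, content, recommendations, or decisions?",
    "Can those outputs influence physical or virtual environments?",
]
# Adaptiveness after deployment is indicative, not required
# ("may exhibit adaptiveness" in the definition).
INDICATIVE = ["May the system adapt after deployment?"]

def screen(core_answers: list[bool]) -> str:
    """Rough triage only; unclear cases go to legal and tech together."""
    if all(core_answers):
        return "Likely an AI system under the AI Act - assess its risk category next."
    if any(core_answers):
        return "Unclear - review with legal and the tech team together."
    return "Likely out of scope, but document the assessment."

print(screen([True, True, True, True, True]))
```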

New Risk Categories
You can reuse your risk management skills when working with the AI Act, but keep in mind that the Act has already defined the risk categories into which every AI system must be placed. When assessing a new AI tool, whether a purchase or an in-house development, you first need to identify its risk category. And while we are used to some types of personal data belonging in different categories, this assessment depends much more on the purpose and potential outcomes of the use of the AI tool than on what kind of data is being processed in it.
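Here is a toy sketch of that purpose-driven triage. The category lists are small illustrative fragments, not the Act's full list of prohibited practices or Annex III use cases.

```python
# A simplified, illustrative sketch of AI Act risk triage driven by the
# intended purpose rather than the data types involved. The sets below
# are fragments for demonstration, not the Act's complete categories.
PROHIBITED = {"social scoring by public authorities", "subliminal manipulation"}
HIGH_RISK = {"recruitment and candidate screening", "credit scoring", "exam scoring"}
TRANSPARENCY_ONLY = {"customer-facing chatbot", "ai-generated content"}

def triage(intended_purpose: str) -> str:
    """Assign a provisional AI Act risk category from the intended purpose."""
    if intended_purpose in PROHIBITED:
        return "prohibited - the practice may not be deployed at all"
    if intended_purpose in HIGH_RISK:
        return "high-risk - conformity assessment and FRIA obligations apply"
    if intended_purpose in TRANSPARENCY_ONLY:
        return "limited risk - transparency obligations apply"
    return "minimal risk - no specific AI Act obligations (GDPR may still apply)"

print(triage("recruitment and candidate screening"))  # -> high-risk
```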

Ethics and Bias in Focus, Not Necessarily Privacy
The GDPR only applies to personal data, while the AI Act also covers AI systems that do not process personal data but still pose risks (e.g., bias, lack of transparency). Privacy professionals must broaden their scope beyond data protection and learn to address other types of risk. At the same time, the AI Act has a looser take on the processing of sensitive personal data, and we need to maintain the organisation's focus on complying with privacy regulations while moving forward.

What This Means for Privacy Professionals:

🔹 Your expertise in compliance & risk management is crucial—apply it to AI governance.
🔹 AI compliance requires cross-functional collaboration—privacy, compliance, IT, and product teams must work together.
🔹 AI Act and GDPR overlap significantly—you can bridge the gap between data protection and AI regulation.

But some of these overlaps also result in clashes between the two regulations. And while the GDPR takes precedence, working with these uncertainties and sometimes conflicting rules means constantly adapting to new regulatory guidelines and cases, and of course trying to balance the business point of view.

In conclusion

We as privacy professionals are well-versed in privacy rights, risk assessments, and regulatory compliance, but many of us lack the technical expertise needed to assess AI models, understand algorithmic transparency, or evaluate robustness and cybersecurity risks. 

As regulations evolve to address emerging technologies, compliance professionals must continuously expand their skill sets. Moving forward, effective AI compliance will depend on strong cross-functional collaboration between legal, technical, and product teams.