The year has just begun, but impactful decisions are already reshaping the privacy landscape. For the first time, the EU's own General Court has ordered the European Commission to pay damages for breaching the bloc's data protection rules.
Let’s dig into it.
In a January 8 decision, the General Court of the European Union set a new precedent by ordering the European Commission to pay damages to an individual for transferring his personal data to the U.S. without adequate protections. The court awarded €400 to the plaintiff, Thomas Bindl.
In a previous newsletter, we predicted that 2025 would bring the first regulatory decisions against AI providers. We didn't have to wait long: Italy's Data Protection Authority imposed a €15 million fine on OpenAI at the close of 2024.
Beyond the fine itself, Italy's Data Protection Authority (Garante) also ordered OpenAI to carry out corrective measures, including a six-month public information campaign in Italian media explaining how ChatGPT collects and processes personal data.
The Garante is one of the EU's most proactive regulators in assessing and enforcing AI platform compliance. In 2023, it briefly banned ChatGPT in Italy over alleged breaches of EU privacy rules.
While OpenAI has called the €15 million fine “disproportionate” and plans to appeal, Garante emphasized that the penalty reflects OpenAI's cooperative stance during the investigation, implying the fine could have been significantly higher had the company not engaged constructively.
Facing regulatory pressure, Meta is retiring its AI-powered Instagram and Facebook profiles. The company first introduced these AI character accounts in September 2023 and removed most of them by summer 2024. A few characters remained on the platforms, however, and drew renewed attention after Meta announced plans to roll out more AI character profiles at the start of this year.
While these Meta-generated accounts are being removed, users can still create their own AI chatbots. Meta attaches a disclaimer to all its chatbots warning that some messages may be “inaccurate or inappropriate”. It remains unclear, however, whether the company actively moderates these messages or ensures they comply with its policies.
These developments signal an evolving privacy landscape where accountability and transparency are becoming non-negotiable. Both companies and regulators are being challenged to set new standards for data protection and AI governance.