Meta to use your personal photos for AI training?

Written by Aušra Mažutavičienė on June 25, 2024

The background

Meta announced that to better reflect the "languages, geography, and cultural references" of its European users, it needs to use their public data to train its AI model, Llama. To cover this data use, Meta has updated its privacy policy, which will take effect on June 26.

As we all know, developing AI models requires a lot of data, and AI companies often collect it from the open web. So perhaps Meta's idea of using posts and images uploaded to its own platforms is an attempt to gain a competitive edge. But should that really happen at the expense of its users' privacy?

It should be noted that Meta has confirmed that personal data from accounts of users under the age of 18 will not be used. It remains uncertain to what extent posts and images that are only visible to friends will also be used to develop Meta's AI. But from the privacy notice update, it is clear that Meta will use posts and images made publicly available on Facebook and Instagram. This is a controversial decision, and many people are concerned about their content being used in this way.

The complaints

Vienna-based group NOYB, led by activist Max Schrems, lodged complaints last week with 11 national privacy regulators regarding Meta's AI training plans. They urged the authorities to intervene before Meta begins training Llama's next generation. In NOYB's view, Meta violates the GDPR by lacking a valid legal basis to process users' personal data for training AI models and by making it difficult for users to opt out of such processing. The complaints were filed in Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland, and Spain.

Once again, Norway is among the first countries to react to Meta's data practices. The Norwegian Consumer Council has joined NOYB's complaint and urged the Norwegian data protection authority (Datatilsynet) to immediately halt Meta's use of personal data to train its AI models and to assess the legality of these practices. Datatilsynet has already taken a stance, stating that the legality of Meta's new practice is 'doubtful', as the most natural approach would have been to ask users for their consent before using their posts and images in this way.

In this type of case, data protection authorities across the EEA will likely cooperate in assessing the situation, and we'll be closely monitoring the developments.

What can you do about it?

Meta says that it is committed to honoring all European users' objections to the use of their content for AI training. If an objection form is submitted before Llama training begins, that person's data won't be used to train those models, neither in the current training round nor in the future.

If you have a Facebook and/or Instagram account, you need to object to AI training separately for each account you hold. You can submit the objection forms here:

However, NOYB isn't impressed with Meta's opt-out process, calling it a farce. They argue that Meta makes it overly complicated to object, even requiring users to state personal reasons to complete the form. "In total, Meta requires some 400 million European users to 'object', instead of asking for their consent," says NOYB.