A few days ago, it came again – the email from Meta reminding us that we’re now training material for their artificial intelligence.
As before, I’ve opted out. It’s starting to feel like a yearly spring ritual – like changing tires, just a bit more digital and a lot less optional. Meta states that they use “public information, such as posts and comments from accounts belonging to people over 18” to improve their AI services. They emphasize that this is done under “legitimate interests” as the legal basis. And yes – you can object. If you know how.
But is it really okay?
Under the GDPR, all processing of personal data needs a legal basis – and for purposes that aren’t necessary for the service you signed up for, that basis is normally active, informed consent. Using publicly shared posts to train AI falls into a grey area, and there’s ongoing debate about whether it’s even lawful without explicit permission.
Meta relies on the so-called “legitimate interests” basis instead, but this only holds up if Meta’s interest outweighs the individual’s right to privacy – and if users could reasonably expect their data to be used that way. That’s hardly the case for old posts, or for people who are no longer alive.
And yet, users must find the form, understand what’s at stake, and actively opt out. That runs against both the spirit and the letter of the GDPR, where the default should be that you give consent, not that you have to withdraw it.
But that’s not what hit me this time. What struck me was something else entirely:
What about those who can’t opt out?
My mother passed away a few years ago. She had a Facebook account, which I – like so many others – turned into a memorial page. A small digital space for family and friends to remember, to share memories, to say goodbye.
But in Meta’s system, her account still exists. Her photos. Her words. Her comments. Everything is still there.
And Meta offers no way for me, as her relative and the account’s designated legacy contact, to opt her out of being used to train artificial intelligence. She can’t give consent. I can’t object on her behalf. Yet her data remains accessible, and as far as I know, Meta has no systematic exemption mechanism for memorialized accounts.
So in practice, even the dead are used as AI fuel.
What does Meta actually say about memorial accounts?
When a Facebook profile is turned into a memorial page, all content remains – nothing is deleted. It stays visible to the audience it was originally shared with, such as “Friends” or “Public.” After criticism in 2019, Meta made changes to keep memorial accounts from showing up in birthday reminders or friend suggestions, using AI to filter them out of those sensitive features.
But beyond that, there is no information suggesting that memorial accounts are treated differently in the background.
Public content from these accounts appears to be subject to the same policies as any other publicly available data – and therefore can, as far as we know, be included in the dataset Meta uses to train its generative AI models.
There are no explicit exemptions for deceased users, and no way for relatives to opt them out of such use.
A digital ethical blind spot
This isn’t just about privacy. It’s about dignity.
It’s about the idea that someone’s final words – perhaps meant only for close friends – may now be part of a dataset powering a machine learning model: generating new images, responses, and words, without the sender having any control. Or a pulse.
And how many are affected? How many thousands – or millions – of Facebook accounts have been turned into memorial pages?
How many of them include poems, life lessons, pictures from family events, or reflections from a hospital bed?
All of this is available if it was ever public. And that makes it eligible training material – unless someone says no.
But the dead don’t say no.
A hole in the system
When I submitted my objection, I added this message:
“I manage my deceased mother’s Facebook account, which is now a memorial page. She obviously cannot consent – and as her next of kin, I’m also unable to object to her data being used to train AI. This means that Meta can freely use her pictures, words, and interactions as training data for its models, without any form of oversight or control from those left behind. It feels unethical and disrespectful, and there’s currently no functioning solution to protect the data of the deceased in this context.”
The response I received was a generic message. No answer to the question. No mention of memorial pages. No insight. Just a reminder that I can’t reply to their email.
And this wasn’t the first time I raised the issue. I did the same when Meta sent a similar email last year – and back then, I also contacted the Norwegian Data Protection Authority and tipped off the media. But no one took it further. No one could say anything concrete about how Meta handles data from memorial pages. And no one wanted to pick up the story.
The silence continues.
What now?
This deserves public attention. Meta needs to face questions they can’t ignore.
Maybe we need new regulations for how companies can use data from deceased individuals.
Maybe memorial accounts should automatically be excluded from AI training.
Maybe relatives should have the right to object – not just on behalf of themselves, but for those who can no longer speak for themselves.
Because if we don’t set boundaries, we’re not just giving away memories.
We’re giving away voices.
What do you think?
Have you thought about this before – or do you know someone who’s experienced something similar? I’d love to hear your thoughts. Leave a comment below – maybe the silence won’t last forever if enough of us speak up.