Photos of your children are being used to train AI without your permission, and there’s nothing you can do about it
Without legislation to protect consumers from predatory AI scraping, even our most personal information is at risk of becoming training data for AI models.
Note: This editorial previously appeared in The Hill.
Human Rights Watch just completed a sweeping audit of AI training materials and revealed that pictures of children scraped from the internet were used to train models — without the consent of the children or their families.
This already isn’t great, but it gets much worse.
According to HRW: “Some children’s names are listed in the accompanying caption or the URL where the image is stored. In many cases, their identities are easily traceable, including information on when and where the child was at the time their photo was taken.”
Did I mention it gets worse? Many of the images that were scraped weren’t publicly available on the internet but were hidden behind privacy settings on popular social media sites.
In other words, some parents who thought they were doing everything right in sharing images of their kids are about to find out just how wrong they were.
I’m not unsympathetic. I’m from Australia. I live with my wife and kids in the States. There was a time when social media seemed like the perfect vehicle to keep friends and loved ones up to date on my growing family. Ultimately, I realized that I was violating my kids’ privacy, and that later in life, they might not want these pictures online and available.
Sharenting (posting information, pictures and stories about your kid’s life online) has increasingly come under fire for a lot of very legitimate reasons. A three-year-old can’t meaningfully consent to their parents sharing their potty-training fail video for the world to see. It might seem like innocent enough fun, but a three-year-old doesn’t stay three years old forever, and today’s children will have extensive information about them online well before they’re old enough to consent.
But aside from a child not being able to consent, HRW’s report reveals that even parents have no way of knowing what the long-term implications of sharenting might be. Ten years ago, nobody imagined that the photo album they shared of their family vacation might be ingested into a machine-learning model. The unintended consequences are real, and they’re already rolling out.
Of course, a reasonable reading might be that this shouldn’t be allowed at all. Why do for-profit AI companies have the right to train on anybody else’s data? Let alone children’s? Let alone data hidden behind privacy settings?
Surely the Federal Trade Commission will have something to say about this. Except that, as of last month, the FTC and every other federal agency had its hands tied behind its back when the Supreme Court overturned the Chevron doctrine, taking power away from federal agencies and handing it to the courts.
“In one fell swoop, the majority today gives itself exclusive power over every open issue—no matter how expertise-driven or policy-laden—involving the meaning of regulatory law,” wrote Justice Elena Kagan in her dissent from the ruling. “As if it did not have enough on its plate, the majority turns itself into the country’s administrative czar.”
If a federal privacy law wasn’t cooked before, it’s certainly cooked now. The overwhelming result will be to push privacy legislation back to the states. Meanwhile, federal decisions will stay in limbo as understaffed courts with no special insight on privacy try to wade through a workload they are neither prepared nor equipped for.
While we wait, AI will continue scraping kids’ data, and whether that’s perfectly legal will ultimately come down to the state you live in.
Sharing photos of your kid’s Little League game might be a fun way to stay connected to family near and far, but until meaningful protections are in place, it’s a risk I wouldn’t advise anybody to take. We deserve data dignity, we deserve ethical technology, we deserve sound and responsible guardrails for AI. At present, we have none of that, and the Supreme Court’s decision adds a significant hurdle to ever achieving those things.
In the meantime, Big Tech has been left to make its own rules. Perhaps the only way to get its attention is to delete the apps, stop posting and cease feeding the beast.
State legislators can’t act fast enough
What We’re Reading on Ethical Tech This Week
Every week, we round up the latest in Ethical Tech. Subscribe now and also get our monthly digest, the Ethical Tech News Roundup!
Infosecurity Magazine - Apple Must Convince Us to Trust AI With Our Data
Can Apple Intelligence solve the issues of AI security?
PRN - OneTrust Named to the Forbes Cloud 100 for Sixth Consecutive Year
OneTrust, the market-leading platform for responsible AI use, continues to see growing success.
Bleeping Computer - X faces GDPR complaints for unauthorized use of data for AI training
X may have covertly used millions of users' data to train its Grok AI.
Performance Marketing World - Navigating the compliance maze: B2B marketing in a world of data privacy regulations
Keeping up with data privacy compliance in B2B marketing.
The Conversation - A bipartisan data-privacy law could backfire on small businesses – 2 marketing professors explain why
Analysis of the potential downsides to a privacy bill.
CSIS - Protecting Data Privacy as a Baseline for Responsible AI
How should we protect our privacy in the ever-accelerating world of AI?
Digiday - WTF is surveillance pricing?
Are we moving towards a world where the price of goods will be determined by your personal data?
Computer Weekly - Australia’s cyber security skills gap remains pressing issue
Without sufficient cybersecurity professionals, Australia faces an increased risk of data breaches.
NRF - ISED’s consultation on AI compute: Canada’s race to remain globally competitive
Canada is pushing into the growing AI and computational resource market.