Global Democracy Has a Data Problem
This week we’re featuring a piece written by Jonathan Joseph (JJ), our favorite privacy nerd and Board Advisor to the Ethical Tech Project.
Below is a shortened version; make sure to check out the full-length version where it ran in The Hill!
As the U.S. approaches election season, the public is increasingly concerned about how AI will be used to spread misinformation through unchecked deepfakes. As JJ writes:
“We’re already seeing the warning signs. In New Hampshire, primary voters received AI-generated calls in which a deepfaked President Joe Biden told them to stay home on primary day. A super PAC was found to be using AI to create a deepfake version of Democratic challenger Dean Phillips. Elsewhere in the world, politicians from London to Lahore are now the subject of deepfaked audio and video, and experts say the concept of truth itself is now being destabilized by ubiquitous GenAI technologies.”
As he notes, there are a number of proposals from the public and private sectors to avert catastrophe, but none of them goes far enough. “Efforts to flag deepfakes or govern their circulation will fall flat unless our efforts at AI regulation are accompanied by robust measures to give people real agency, and the power to determine how their data is used and what kinds of data-driven digital content they experience.”
JJ then takes a deeper look at the proposals pushed by OpenAI, particularly around labeling AI-generated images, to demonstrate the need for stronger protections:
“Much of what OpenAI is proposing boils down to the implementation of nutritional labels for AI-generated content, allowing people to understand what they’re viewing and see how it was created. That isn’t bad, but telling people what went into the content they’re being shown is just one piece of a bigger puzzle.
Look at it this way: a nutritional label tells you how the sausage got made, but that doesn’t do the pig much good. When it comes to GenAI, we aren’t just consumers: we’re part of the product. That means passively acknowledging how data was used isn’t enough—we need real agency over what’s being done with our data.”
JJ then explains what the solution would entail:
“Instead, we need to take steps to put consumers in control of how they interact with AI models and AI-generated content. A user might want to say: never use my likeness in an AI image generator, and never use my data to tailor the GenAI content that I’m shown. They shouldn’t have to articulate those preferences each and every time they view an image or a video; instead, they should be able to state their preferences once and have them reflected across their entire digital universe.
Ultimately, this boils down to trust—and unless we’re providing people with real control and agency over how their data is used and where it flows, there’s simply no reason for them to trust either AI technologies or the organizations that create and operate them. That’s especially corrosive in the political arena—it’s where the idea that truth itself is unknowable creeps in, and it quickly leads to cynicism and political disengagement. But it’s part of a broader problem that impacts every part of the AI economy and of the broader data economy from which it’s built.”
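For the technically inclined readers, here's a minimal, hypothetical sketch of what that kind of once-stated, machine-readable preference could look like. The schema and field names below are our own invention for illustration; no such standard exists today.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIDataPreferences:
    """Hypothetical schema: a person's once-stated choices about AI and their data."""
    allow_likeness_in_generation: bool    # "never use my likeness in an AI image generator"
    allow_data_for_personalization: bool  # "never use my data to tailor the GenAI content I'm shown"

def may_use_likeness(prefs: AIDataPreferences) -> bool:
    # An image generator would run a check like this before using a person's likeness.
    return prefs.allow_likeness_in_generation

def may_personalize(prefs: AIDataPreferences) -> bool:
    # A platform would run a check like this before tailoring AI content to a user's data.
    return prefs.allow_data_for_personalization

# The preferences JJ describes, stated once and carried everywhere:
prefs = AIDataPreferences(
    allow_likeness_in_generation=False,
    allow_data_for_personalization=False,
)
assert not may_use_likeness(prefs)
assert not may_personalize(prefs)
```

The point isn't the code itself: it's that a preference expressed once, in machine-readable form, could be checked programmatically everywhere a person's data or likeness is about to be used, instead of being renegotiated site by site.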
Tell us your thoughts! How are you feeling about the role of technology in the upcoming elections? What protections would you like to see in place to prevent rampant misinformation?
What We’re Reading On Ethical (and Non-Ethical) Tech This Week:
It’s open season on personal data: We need a Data Protection Agency now - The Hill
White House touts new AI safety consortium: Over 200 leading firms to test and evaluate models - VentureBeat
Tech companies will not save our kids: ICCL speaks at Oireachtas Children’s Committee - ICCL
FTC Proposes New Protections to Combat AI Impersonation of Individuals - FTC