AI Isn’t An Excuse for Companies to Pull a Fast One on Their Customers
The FTC is letting companies know their sneaky “updates” are really just good old-fashioned deception.
It’s been a busy year for the FTC. From banning data brokers from selling location data to investigating the role of big tech firms in generative AI, the agency has burst into 2024 with a flurry of activity. Amidst all those announcements, one of the stories we are following most closely is the agency’s increased focus on what most people (though not readers of this Substack, of course!) consider the most boring part of any digital product: the terms and conditions.
What did the FTC say?
The FTC has been letting companies know that, unlike roughly 90% of Americans, it is closely reading their terms and conditions to make sure they are not engaging in deceptive practices. The agency made this clear in a blog post last month, and FTC Chair Lina Khan has been telling audiences at tech conferences that “firms cannot use claims of innovation as cover for law breaking.”
While this statement is a warning, companies know the FTC is willing to follow through and hold them accountable for violations. Just last year, Ring was fined millions of dollars for failing to notify customers that their data was being used to train algorithms. These rules extend beyond software companies as well; the FTC also fined genetic testing company 1Health for changing its privacy policy.
Companies may be tempted to change their policies to justify using data for additional purposes such as training models, but it is clear that America’s enforcement agencies are on the lookout and eager to crack down.
How Will Companies and Consumers React?
The FTC’s warning was prompted by a documented trend of companies quietly changing their terms and conditions without their customers knowing. Zoom, Adobe, and Spotify were all caught changing their terms within the last year to allow additional data uses like training AI models. Customers are left wondering whether their likeness and creative work are being used to build the next wave of generative AI tools.
Companies are feeling intense pressure to maximize the value of their data, but very few anticipated the current AI wave, and as a result many do not have the permissions they need to use that data to train models. While large companies like Zoom have been caught changing their terms, many smaller companies have presumably gotten away with it because their smaller user bases draw less scrutiny. Every company is facing questions about what to do in the world of AI. Our hope is that the threat of an FTC investigation encourages them to make the ethical choice and not deceive their customers.
Tell us your thoughts! Do you think these warnings from the FTC will change companies’ behavior or not?
What We’re Reading On Ethical (and Unethical) Tech This Week:
Generative AI's privacy problem - Axios
Automakers Are Sharing Consumers’ Driving Behavior With Insurance Companies - New York Times
TikTok’s biggest threat just passed the House - Politico
World’s first major act to regulate AI passed by European lawmakers - CNBC
What to Do About the Junkification of the Internet - The Atlantic