Privacy Initiatives Mean Nothing If Companies Don’t Follow Through
Tech Companies Need To Be Held Accountable to Guarantee They Protect User Data
The proliferation of AI models means user data will persist as it gets used to train various new models. Companies need to invest in new systems to make sure privacy requests are upheld.
We’ve now arrived at the final post in our series on our 5 principles - thank you to everyone who has followed along with the entire … what do you call a series of five? A quintet? A pentalogy?
Semantics aside, our fifth and final principle is Accountability.
The late great Kobe counting out his favorite ethical tech principles
Accountability: A business’s technology and its employees must do what the business says they will do, system- and organization-wide. We envision a world where businesses that don’t do right by people are held accountable.
Accountability is like a load-bearing pillar in the construction of ethical technology - take it away, and the whole thing falls apart. Privacy promises mean little if they aren’t followed through on, and building accountable systems is how organizations translate their stated objectives into reliable action.
Need proof? Just ask Amazon.
Amazon’s $25 Million Lie
On the surface, Amazon’s voice-activated Alexa seemed to offer privacy protections for children in compliance with federal law. Parents could ask Amazon to delete their children’s data, granting families the peace of mind that intimate recordings of their kids wouldn’t be used for Amazon’s profit.
Here’s the only problem: Amazon wasn’t deleting the data.
As revealed by a $25 million FTC penalty, Amazon didn’t fully remove user data despite users’ deletion requests. While the data was deleted in some locations, Amazon retained it in other parts of the organization for continued use in product development and as training data.
Not only did Amazon violate its users’ trust, but it also violated federal law meant to prevent this very thing from happening. The 1998 Children’s Online Privacy Protection Act (COPPA) gives parents the right to have any of their child’s personal information deleted. Despite promising to honor such requests, Amazon broke the law by failing to manage its data across the organization.
The big ethical question is: How should companies balance users’ privacy demands against their own goals of creating value from user data?
Given Amazon’s organizational complexity, it’s plausible (though inexcusable) that one arm of the organization simply wasn’t aware of the deletion requests being received elsewhere, and that Amazon failed to manage privacy controls across the organization.
These problems will only become more complex and thorny in the age of AI. Will deletion requests mean data is removed from all training data across an entire organization or just a specific dataset? Are all models going to be retrained without the removed data? Is that feasible?
Good intentions won’t be enough. Companies need new privacy infrastructure that allows them to comply with privacy regulations and with their promises to users. While one $25 million fine isn’t enough to change how all of Silicon Valley operates, regulatory actions like this will hopefully push companies to be more accountable in the future.
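What could that infrastructure look like? Here’s a minimal sketch in Python - purely illustrative, not based on any company’s actual systems, and using hypothetical names like DeletionRegistry, voice_recordings, and training_corpus. The idea: every system that stores user data registers a deletion handler with a central registry, so a single request fans out everywhere and leaves an audit trail.

```python
# A minimal, hypothetical sketch of org-wide deletion propagation:
# every store of user data registers a handler, one deletion request
# fans out to all of them, and an audit log records each outcome.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

# A handler is any callable that deletes one user's data from one store
# (e.g., a recordings bucket, an analytics warehouse, a training corpus).
DeletionHandler = Callable[[str], None]

@dataclass
class DeletionRegistry:
    handlers: dict[str, DeletionHandler] = field(default_factory=dict)
    audit_log: list[dict] = field(default_factory=list)

    def register(self, store_name: str, handler: DeletionHandler) -> None:
        self.handlers[store_name] = handler

    def delete_user(self, user_id: str) -> bool:
        """Fan a deletion request out to every registered store.

        Returns True only if every store confirmed deletion, so a
        compliance team can see whether a request was honored
        organization-wide rather than in just one system.
        """
        all_succeeded = True
        for store_name, handler in self.handlers.items():
            try:
                handler(user_id)
                status = "deleted"
            except Exception as exc:  # log the failure and keep going
                status, all_succeeded = f"failed: {exc}", False
            self.audit_log.append({
                "user_id": user_id,
                "store": store_name,
                "status": status,
                "at": datetime.now(timezone.utc).isoformat(),
            })
        return all_succeeded

# Usage: an Alexa-style recordings store and a training corpus both
# register, so one request covers both instead of just one location.
registry = DeletionRegistry()
registry.register("voice_recordings", lambda uid: print(f"purged recordings for {uid}"))
registry.register("training_corpus", lambda uid: print(f"removed {uid} from corpus"))
assert registry.delete_user("user-123")
```

The point of the pattern isn’t the code itself - it’s that deletion becomes a single, auditable operation rather than something each team handles (or forgets) on its own, which is exactly the failure mode the FTC found at Amazon.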
Tell us your thoughts! In a perfect world, what policies or legal mechanisms are required to ensure companies are accountable to users for their privacy promises?
What We’re Reading On Ethical Tech This Week:
U.S. to rein in technology that limits Medicare Advantage care - Washington Post
FTC Chair Lina Khan's lawsuit isn't about breaking up Amazon, for now - NPR
Hollywood writers’ strike ends with first-ever protections against AI - VentureBeat
Data privacy law seen as needed precursor to AI regulation - Roll Call