Data Privacy in the Age of AI Means Moving Beyond Buzzwords
Privacy is possible, but only if companies move beyond empty promises and commit to ethical data practices.
By Dr. Maritza Johnson, Ethical Tech Project Board Member
Note: This editorial originally appeared in the May 2024 edition of InformationWeek
As tech companies race to capitalize on the latest developments in artificial intelligence, people’s privacy concerns are being disregarded in the pursuit of new features and profit opportunities.
Many companies justify their actions with a false narrative that people do not truly care about privacy, but any perceived apathy is the product of generations of companies choosing not to invest in giving customers meaningful privacy choices. Privacy is not dead; if anything, it is more relevant than ever in the face of emerging AI tools built on people’s data. Companies need to acknowledge the importance of privacy and start investing accordingly.
In reality, it is companies themselves, not consumers, that most often disregard privacy concerns. Look no further than the recent data breach at 23andMe for an example of a corporation blaming everyone but itself for its own mistakes.
In recent months, the company disclosed a data leak affecting roughly half of its customers, approximately 7 million people. For many, the exposed data included genetic information, sensitive health information, and a list of their relatives. Instead of acknowledging its own privacy failures, the company responded by blaming users for not updating their passwords and downplaying the breach, claiming the information “cannot be used for any harm.” The company is now facing a class-action lawsuit from users alleging negligence.
We do not have to live in a world of endless breaches and privacy violations. Companies can and should prioritize privacy to maintain trust with their customers, but this does not happen by accident. It requires an unequivocal commitment to privacy from both executives and builders, along with an ongoing investment of resources. It is not enough to say your company applies “privacy by design” without translating that principle into real company policy and practice. Privacy considerations must be at the center of product decisions from the moment you decide to use people’s data, not added on at the end as a half-hearted “retrofit.”
Building for privacy requires assessing whether a company’s existing privacy metrics measure anything of relevance. Simply having roles with “privacy” in the title, for example, is not an effective measure of a privacy practice. In the same vein, headcount is not a privacy solution. Meta proudly claims it has 40,000 people working on its safety and security teams, but that does not change the fact that, according to Consumer Reports, the average Facebook user has data shared with the platform by more than 2,000 different companies. Instead, companies should focus on metrics that evaluate data protection, customer trust, and the enforcement of tangible privacy measures throughout the entire organization.
Privacy and ROI may appear to be at odds, but that is a false dichotomy. If you respect your customers, respect their data. This has to come from the top, which is a challenge for corporate leaders who are incentivized to focus on big, sexy innovation projects instead of mitigating privacy risks. We see this right now as companies rush to hire “chief AI officers” and deploy AI tools while the privacy implications of those tools remain an afterthought.
Leaders should care about privacy not just because it is ethical, but also because it is good for business. Privacy builds trust with your customers and increases their lifetime value to your organization. Polling from the Ethical Tech Project found privacy features increased consumer purchasing intent by more than 15% and increased trust by over 17%. Effective privacy measures also strengthen a company’s reputation, differentiate their product, and protect against ending up on the wrong side of an investigation by the Federal Trade Commission or a state attorney general.
Good privacy practices are possible, and they are attainable with a sustained, committed effort from corporate leadership and everyone who works with data. Thankfully, strategies exist to help business leaders. Two examples I am familiar with, among many, are The Ethical Tech Project’s privacy stack, a privacy reference architecture for technical teams, and the Center for Financial Inclusion’s privacy toolkit for inclusive financial products.
Privacy, or the lack of privacy, in modern technology products is a choice that every company faces. For the sake of their companies, corporate leaders can and should invest in offering their customers meaningful privacy options instead of empty promises.
Dr. Maritza Johnson, formerly with Facebook and Google, is a Principal at Good Research, a Board Member at the Ethical Tech Project, and was the Founding Director of the Center for Digital Civil Society at the University of San Diego.
What We’re Reading on Ethical Tech This Week
Every week, we round up the latest in Ethical Tech. Subscribe now and also get our monthly digest, the Ethical Tech News Roundup!
The Hill - Advertisers, business groups want ‘significant changes’ to data privacy bill
How might significant alterations to the data privacy bill impact the future of online advertising and consumer trust?
Forbes - AI's Emerging Privacy Threats: A Strategic Guide For Business Leaders
With AI advancing rapidly, what steps should businesses take to balance innovation with privacy concerns?
NYT - Why the U.S. Is Forcing TikTok to Be Sold or Banned
Millions of TikTok users would face a major disruption in their entertainment and communication routines if the ban is enforced, but are TikTok’s ethical data issues shared by other major social media platforms?
FT - Privacy fears sap potential of female fertility tech start-ups
How can female fertility tech startups overcome privacy fears to unlock their full potential?
NextGov - House pivots on data privacy bill, removing algorithmic discrimination coverage
With the removal of algorithmic discrimination coverage, are we losing essential safeguards in our data privacy laws?
IAPP - Ahead of 2025 federal election, will Canada pass Bill C-27?
As Canada considers Bill C-27, the country stands on the brink of major privacy law reforms.
SCMagazine - Executives bullish about AI capabilities, but worry about data privacy and security
With so much confidence in AI, how should executives tackle the ethical and bias challenges the article mentions?
Digital Journal - Data privacy change and what consumers actually want (https://www.digitaljournal.com/life/data-privacy-change-and-what-consumers-actually-want/article)
Are businesses truly respecting your privacy preferences?
AdNews - Media agencies weigh in: Australia’s privacy laws vs GDPR
With Australia’s new privacy laws on the horizon, are media agencies prepared to handle the stricter consent requirements?
Biometric Update - Biometrics developers’ dance with data privacy regulations continues
How are biometrics developers adapting to the ever-changing landscape of global data privacy regulations?
Conversations in Ethical Tech
In this episode, JJ and Maritza explore the concept of dark patterns: deceptive design practices that manipulate users into taking actions they didn’t intend to take.
How does the choice architecture of the tech products you use impact your personal privacy and data protection?
How can teams pursue good product design and strong business outcomes while avoiding deceptive practices?
Learn a few common-sense actions you can take as a developer, leader, or consumer, including providing feedback to companies, filing complaints with regulatory bodies, and raising awareness about dark patterns.