What Should “Privacy-Safe” Mean for Brands in 2024?
For those who were unable to participate in last week’s privacy festivities, we are bringing you highlights from Friday’s event, where our friend JJ from Ketch interviewed Arielle Garcia, privacy extraordinaire and founder of ASG Solutions, whom loyal readers will recognize from our live New York event this past November.
The following quotes are just a sample of the many insights Arielle shared in her interview. Be sure to check out the full recording to learn more!
Below are some highlights of the conversation, which have been shortened and edited for this post. Arielle’s responses are in regular text, and JJ’s questions are in bold. Enjoy!
You’ve said, “If consumers don’t trust brands, and advertisers can’t trust their partners, sustainable growth is not attainable for anybody.” What does that mean?
The biggest challenge facing the industry is what I call the trust crisis. People simply don’t trust advertising.
[Arielle then described concerning case studies of a woman concerned about the use of her personal data, including menstrual data, consumers being flooded with low-trust ads, and a couple continuing to receive ads for baby supplies after suffering a miscarriage].
The incentive structures are completely broken. You have this ecosystem that is devoid of trust and fundamentally that’s what needs to be fixed.
How do you manage the tension between ROI and responsible data practices? And, to your point, is it okay for consumers to be flooded with obnoxious ads? What are your thoughts on that?
When I was early in my privacy practitioner phase, I thought [figuring out how to balance effective advertising and respecting people’s privacy] was the thing to be overcome in this industry.
The more I learned, the more I realized that this is a misconception, a false choice that props up a lot of the interests in the ecosystem. The reality is, for the most part, what’s good and fair for consumers, brands, and publishers is aligned … Many of [marketing’s past metrics] show how easy it is for this industry to fall in love with metrics that aren’t telling the right story, and its tendency toward confirmation bias.
As a marketer you’re looking for ways to demonstrate ROI and effectiveness, but in the process users aren’t winning and you’re operating under an illusion of effectiveness; the platforms have the most to gain from this.
This fundamentally is not the right framing to begin with. If marketing and advertising is about engaging with customers and building relationships with people, then respecting their preferences and choices is fundamental, not at odds with effective marketing.
How should marketers start to think about this and engage with privacy teams within their own companies?
Privacy often doesn’t understand marketing and marketing often doesn’t understand privacy. It’s really important to have a baseline understanding of each other’s disciplines.
There’s a misconception among some marketers that privacy or legal might be seen as a roadblock. The reality is that without that partnership, you end up investing in things that are on borrowed time and not solving the problem.
I have such a bad reaction to the term “privacy-safe” because it means absolutely nothing, and it’s become a blank canvas for ad tech platforms to position something they’re doing as solving a problem it actually doesn’t solve.
[Talking about the impact of GenAI and the decline of publishers on the health of democracy]
I’m incredibly worried about what [generative AI] is going to mean for democracy. I think the rise of generative AI and the rise of AI adoption is making this even more of a pressing challenge, and not in the obvious ways that everyone is talking about (e.g. deepfakes or misinformation).
One of the things I find really problematic is how some of these platforms’ suites of marketing products are conditioning marketers to relinquish control and transparency in the process. Take that along with self-graded homework and the affinity for metrics that look good on paper, and you get to a place where marketers are being conditioned not to think about where their ads actually run.
Publishers are going to get squeezed and see revenue pressure increase at the same time that all these GenAI tools make it really easy and really fast to scale disinformation campaigns, and all these sites can easily play the game and be monetized.
The positive thing is that marketers really have the power to stop this if they refuse to accept this “opacity creep” and make conscious efforts to prioritize working with quality publishers and journalism. A lot of this can be abated, but unfortunately that’s a big behavior change that I don’t see happening.
Who is responsible for finding that line between personalization and manipulation? Is it the marketer? Is it the platform?
Anyone with a relationship with a consumer has an obligation to think about this. I see it as a big part of what CMOs should be thinking about … What’s the point of growing your brand if you’re pissing off your customers?! You need to understand your audience and know what makes sense. There is still a place for personalization that’s based on legitimate relationships. It’s not all or nothing! Think about incremental consent. Think about building a relationship with your customers.
Last question: imagine you’re a new Chief Privacy Officer at a brand or publisher, and you’re setting goals for your first 100 days. What’s your advice for that person?
Buckle up! To understand what’s happening, you have to ask all the questions. Especially in this industry, you can ask the same question fifteen different ways and feel embarrassed to ask it again, but the reality is that it is confusing by design. Those areas of confusion are exactly where you should be digging in.
Also understand what’s happening, not just around data flows but also where your biggest obstacles will be in your organization. Understand what misconceptions and commercial interests you’re going to bump up against as you try to do your job, because you need to know that in order to be effective.
Tell us your thoughts! Which of Arielle’s arguments do you agree or disagree with? What questions do you think brands should be asking about privacy in 2024?
What We’re Reading On Ethical (and Non-Ethical) Tech This Week:
Inside Biden’s secret surveillance court - Politico
TSA uses ‘minimum’ data to fine-tune its facial recognition, but some experts still worry - NextGov
N.S.A. Buys Americans’ Internet Data Without Warrants, Letter Says - The New York Times
Facial recognition cameras in supermarkets ‘targeted at poor areas’ in England - The Guardian
The FTC wants to know more about Big Tech’s AI partnerships - Morning Brew