Can We Make AI Fair? Consumers Worry Bias Is Coded In
When algorithms shape human lives, we need guardrails ensuring equitable treatment for all
From startups to Big Tech to the Fortune 500, there is a rush of AI-powered tooling at every level of commerce. Our polling shows people are worried about whether those applications are fair. Companies must anticipate how AI can go drastically awry - and be proactive in ensuring their products are inclusive.
Welcome to Part 4 of our deep dive into how Americans feel about our 5 core principles of ethical data use in the Age of AI. Unlike Indiana Jones, we promise the 4th installment will be our best one yet! After diving into transparency last time, this week we’re focused on fairness.
Fairness: Businesses must measure and mitigate the impact of data systems and the outputs in machine learning, intelligent systems, and artificial intelligence that may have disparate impact or bias in application.
There are countless examples of unfair AI going horribly wrong. Researchers have documented the capacity for AI to replicate societal biases, be less accurate for lower-income families and minority borrowers, contribute to racial inequity in the criminal justice system, and infuse racial and gender bias into image generation. Consumers have already discovered that hastily crafted products do not always work equally well, sometimes with unnerving results.
It isn’t enough for a new tool to complete a task for some people. It needs to work fairly for all people.
The Data Shows Americans Are Worried - But Not in Every Industry
It comes as no surprise that most Americans are worried about whether AI products will be fair. By now, consumers are savvy enough to understand that algorithms are not magical decision-making machines. They predict outcomes based on past data, and past data contains all of the biased messiness of human society. Most people understand this: 71% of our survey respondents agreed that “Artificial intelligence that is built using data from people has the risk of being biased on the basis of race, gender, age, or anything else that people can be biased about.”
But what’s the solution?
Consumers are wary of whether companies can mitigate bias on their own. Only 3% of our respondents felt that Fortune 500 companies could make responsible choices about how artificial intelligence is implemented.
Instead, one path forward is to empower consumers with “nutrition labels” that promote transparency about the AI they are using. We found that 87% of those polled agreed that “Artificial intelligence products, services, and tools should be clear and transparent about how artificial intelligence was built and what went into it.” People can only make informed decisions if they are given trustworthy information. Clear explanations of the data and bias testing of a model would go a long way towards establishing credibility by letting people know the limitations of what they are using.
By now, consumers are well aware of the issue of algorithmic bias in technology like facial recognition algorithms and recruiting software. In this low-trust environment, companies need to take responsibility to ensure fairness before bringing their product to market, and this requires radical transparency about what is being done to ensure products work for all people.
The Big Takeaway: AI Companies Need to Take Fairness Seriously
There is no single solution, but builders of AI models should keep the following things in mind:
Biased data leads to biased models. Even the smallest sources of bias can be magnified once a model is operational, and pre-processing data to mitigate bias before training is a valuable step that every company can take (a sketch of one common approach appears after this list).
It is critical to assess the fairness of the model once it’s implemented. Part of testing any trained model should be evaluating whether it works equally well for everyone (a simple per-group check is sketched below). This is how we can identify and avoid models that are less accurate for women and people with darker skin, or identify when hiring algorithms (like those previously used by Amazon) show gender bias.
Bias is more than just a data problem - it’s a human problem. NIST describes bias as a “socio-technical” problem that goes beyond just fixing data. These are system-level problems that require system-level solutions.
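To make the first point concrete, here is a minimal sketch of one common pre-processing approach: reweighting training examples so that, in the weighted data, the sensitive attribute and the outcome look statistically independent. The column names and toy data below are hypothetical placeholders, not a prescription for any particular product.

```python
import pandas as pd

# Hypothetical training data: "group" is a sensitive attribute, "label" the outcome.
df = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B"],
    "label":   [1, 0, 1, 1, 0, 1],
    "feature": [0.2, 0.5, 0.1, 0.9, 0.4, 0.7],
})

# Weight each (group, label) cell by expected count under independence / observed count,
# so over- and under-represented combinations are rebalanced.
n = len(df)
group_counts = df.groupby("group").size()
label_counts = df.groupby("label").size()
cell_counts = df.groupby(["group", "label"]).size()

def reweight(row):
    expected = group_counts[row["group"]] * label_counts[row["label"]] / n
    return expected / cell_counts[(row["group"], row["label"])]

df["sample_weight"] = df.apply(reweight, axis=1)
print(df[["group", "label", "sample_weight"]])
```

Many scikit-learn estimators accept these weights through the `sample_weight` argument of `fit`, so the rebalancing carries through to training without changing the model itself.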
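And for the second point, a first-pass fairness check after training can be as simple as slicing held-out metrics by group. The arrays below are made up for illustration; in practice you would use your own test set and whatever sensitive attributes are relevant to your product.

```python
import numpy as np

# Hypothetical held-out labels, model predictions, and a sensitive attribute per example.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Report accuracy and positive-prediction rate per group; a large gap between
# groups is a signal to investigate before shipping the model.
for g in np.unique(group):
    mask = group == g
    accuracy = (y_true[mask] == y_pred[mask]).mean()
    selection_rate = y_pred[mask].mean()
    print(f"group {g}: accuracy={accuracy:.2f}, selection_rate={selection_rate:.2f}")
```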
Tell us your thoughts! Which companies are great examples of ensuring fairness in their products? What steps do AI-powered companies need to take to test products before launching?
What We’re Reading On Ethical (and Non-Ethical) Tech This Week:
How Nations Are Losing a Global Race to Tackle A.I.’s Harms - New York Times
New Guide Provides Integrity Guidance for Start Ups and Early Stage Companies - Integrity Institute
Exclusive: Mozilla adds 4 new directors from diverse backgrounds to its nonprofit board in stark contrast to OpenAI - Fortune