AI Requires New Ways of Building, Operating and Thinking About Technology Companies
Two recent articles from our board members demonstrate how AI requires novel approaches to uphold ethical standards
Ethical data ecosystems involve technological solutions, but answering AI’s questions will require more than just technologists.
This week we’re taking a break from our walk through the organization’s 5 core principles to highlight two pieces of writing from members of our board. Loyal readers, do not fret! Next week we’ll pick up where we left off with a discussion of fairness, and we’ll finish out the series with accountability.
Piece #1: Answering AI’s biggest questions requires an interdisciplinary approach (Published in TechCrunch)
This piece was written by our board chair Tom Chavez, who is also the co-founder of super{set} and the CEO and co-founder of Boombox.io. It makes the case that AI companies need a diverse set of viewpoints in order to navigate the technology’s unique alignment risks.
Drawing from his experience double-majoring in computer science and philosophy, Tom reflects on how foundational questions on the nature of humanity and reality can inform how we build technology and avoid treacherous outcomes.
Building on this argument, he contends that we shouldn’t look only at what companies are building, but also at who is leading those efforts and whether they are equipped to think through the thorniest ethical questions. His “dream team” roster for thoughtful company leadership would include a chief AI and data scientist, a chief philosopher architect, and a chief neuroscientist, all working to make sure the technology benefits society.
The big ethical question: What are the blind spots of traditional tech leadership? Who should have a seat at the table to help make better-informed decisions?
Piece #2: How to Design an Ethical Data Ecosystem for the AI Era (Published in The Messenger)
This article was authored by two of our board members, Maritza Johnson and Sara Watson. Maritza, formerly of Facebook and Google, is the founding executive director of the Center for Digital Civil Society at the University of San Diego. Sara is the founder and chief analyst at SMW Insights and currently a Siegel Research Fellow at All Tech Is Human; she has previously worked as an industry analyst at Forrester, Insider Intelligence, and Gartner.
They draw on their experience inside tech companies to explain how the centrality of data in modern organizations obligates companies to be thoughtful in building data ecosystems that protect people’s privacy and dignity. This requires more than commitments alone; it takes intentional action, resources, and privacy-by-design systems to build the technology and norms that keep people’s information safe.
Check out the piece to learn more about how each of our 5 principles is relevant for navigating the ethical challenges of AI and making sure this emerging technology promotes human dignity instead of eroding privacy rights.
The big ethical question: What steps do companies need to take to protect people’s data?
Tell Us Your Thoughts:
What other disciplines are valuable for understanding AI? Are there any examples you love of companies that treat their user data well?
What We’re Reading On Ethical Tech This Week:
The Google Trial Is Going to Rewrite Our Future - Tim Wu, NYT
United States takes on Google in biggest tech monopoly trial of 21st century - NPR
Reality Check: How to Protect Human Rights in the 3D Immersive Web - NYU Stern Center for Business and Human Rights
Reset and Reinvent: The Thriving Landscape of Tech Innovation - Bain & Company Technology Report 2023