Algorithmic Consumer Protection

J. Nathan Matias
5 min read · Oct 9, 2017

To manage the risks & benefits of AI, we need to look beyond the fairness and accuracy of AI decisions.

This March, Facebook announced a remarkable initiative that detects people who are most at risk of suicide and directs support to them from friends and professionals. As society entrusts our safety and well-being to AI systems like this one, how can we ensure that the outcomes are beneficial?

Facebook’s machine learning system automatically intervenes if it thinks a user is at risk of suicide.

I recently spent a weekend at the University of Michigan to discuss this question with a gathering of scholars, journalists, and members of civil society. As we talked, I noticed something I’ve seen elsewhere: discussions tend to focus on algorithmic fairness and discrimination. Thanks to pioneering work over the last 5 years, problems of algorithmic discrimination are starting to be understood more widely in online advertising, image recognition, logistics systems, and judicial sentencing, to name just a few areas.

Research on fairness checks that AIs treat people fairly. I want to be sure they’re actually saving lives.

Throughout these conversations, I often feel like I’m asking completely different questions. It’s only recently that I started to find the language for what’s different. Think about Facebook’s suicide prevention initiative: while research on fairness checks that AIs treat people fairly, I want to be sure they’re actually saving lives. While a system that benefits some people more than others is a problem, we should also worry about AI systems that harm everyone equally.

(The ideas here were discussed with several people from the workshop, including Christo Wilson, Solon Barocas, Paul Resnick, and Alondra Nelson. All errors and mistakes are my own. Since the event was under Chatham House Rule, I’ll acknowledge people as they are willing to be named.)

The Utility and Fairness of Algorithms

I believe that utility and fairness are both important questions to ask of algorithms, questions that require very different methods to study.

When researchers ask whether online advertising discriminates on the basis of gender or race, they’re asking questions about fairness and accuracy. Mathematicians, statisticians, and computer scientists love these questions: it can seem straightforward to audit an algorithm’s decisions in controlled environments and calculate fairness using mathematical definitions. Research on biased algorithms can often be done without even talking to people or looking at the human impact of a system, so long as you can create a statistical definition of bias or inequality (whether those definitions are adequate is a debate for another day).
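To make that concrete, here is a minimal sketch of one such statistical audit, using demographic parity as the definition of bias. The data, the group labels, and the 0.8 threshold are purely illustrative assumptions, not drawn from any real audit.

```python
# A minimal sketch of a statistical fairness audit using demographic parity.
# The audit data below is hypothetical and exists only to show the calculation.
import pandas as pd

# Hypothetical audit log: one row per decision, recording the group of the
# person affected and whether the algorithm made a favorable decision
# (e.g. showed a job ad).
audit = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "favorable": [1,    1,   0,   0,   1,   0,   0,   1],
})

# Demographic parity: compare the rate of favorable decisions across groups.
rates = audit.groupby("group")["favorable"].mean()
disparate_impact = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {disparate_impact:.2f}")

# A common (and contested) rule of thumb flags ratios below 0.8 as suspect.
if disparate_impact < 0.8:
    print("Potential disparity worth investigating.")
```

Note that an audit like this says nothing about whether the system helps or harms the people it touches; it only compares how decisions are distributed.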

To evaluate the risks or benefits of an algorithm, we need to study its impact in people’s lives.

It’s almost never possible to evaluate the utility of an algorithm by looking at the code or measuring it against a mathematical formula. To evaluate the risks or benefits of an algorithm, we need to study its impact in people’s lives, whether in controlled lab conditions or in the wider world. Here, the tools of social science help us understand how algorithms relate to our social world, impacting human outcomes at the micro and the macro levels.

I hope this chart helps researchers, journalists, and regulators ask better questions about the social impact of an algorithm. Imagine you have an AI system that detects alleged hate speech. An unfair system might make systematic mistakes that disadvantage a certain group of people, perhaps even those it’s designed to protect. A harmful system might increase extremism. As a society, we should work to create systems that are both beneficial and fair, but in most cases right now, we only ask about fairness or utility, never about both.
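As a rough illustration of the fairness half of that distinction, here is a minimal sketch of how one might check a hypothetical hate speech classifier for systematic mistakes against one group, using per-group false positive rates. The posts, labels, and predictions are invented for illustration and do not describe any real system.

```python
# A minimal sketch: per-group false positive rates for a hypothetical
# hate speech classifier, on invented evaluation data.
import pandas as pd

# Each row is a post, with the author's group, the true label, and the
# classifier's prediction (1 = flagged as hate speech).
posts = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "is_hate":   [0,    0,   0,   1,   0,   0,   0,   1],
    "predicted": [1,    1,   0,   1,   0,   0,   1,   1],
})

# False positive rate per group: how often non-hateful posts are wrongly flagged.
benign = posts[posts["is_hate"] == 0]
fpr_by_group = benign.groupby("group")["predicted"].mean()
print(fpr_by_group)

# If one group's speech is wrongly flagged far more often, the system makes
# systematic mistakes against that group, even if overall accuracy looks fine.
```

And even a classifier that passes this check could still be harmful overall; that is the utility question, which requires studying outcomes in people’s lives.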

How to Study The Risks and Benefits of Algorithms

While questions about fairness focus on allocation and decision-making, questions about utility focus on outcomes. I want to do more writing on this topic, but for now, here are a few initial thoughts about studying the utility of algorithms.

First, causal inference is central to research on the utility of algorithms, in two ways:

  • Forward causal inference allows us to estimate the likely effect of introducing a particular algorithm (or a change to an algorithm) in people’s lives. Using randomized trials, for example, we could test whether Facebook’s suicide prevention AI saves lives on average (a minimal sketch of such an analysis follows this list).
  • Reverse causal inference allows us to ask why the algorithm has the effects we see, helping us look for ways to tweak things to be more beneficial and more fair. We can also use these forensic methods to understand what went wrong in cases of substantial harm that we wish to prevent in the future.
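To make the forward-inference idea concrete, here is a minimal sketch of the simplest possible analysis of a randomized trial: comparing average outcomes between randomly assigned treatment and control groups. The outcome, effect sizes, and sample sizes are simulated assumptions; a real study of something like suicide prevention would require careful ethical review and far richer outcome measures.

```python
# A minimal sketch of forward causal inference with a randomized trial,
# using simulated data. All numbers here are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Simulated binary outcomes (e.g. whether a person later reached support
# resources) for users randomly assigned to the intervention or to control.
control   = rng.binomial(1, 0.10, size=5000)   # assumed baseline rate
treatment = rng.binomial(1, 0.12, size=5000)   # assumed rate with intervention

# Average treatment effect: difference in mean outcomes between the two arms.
ate = treatment.mean() - control.mean()

# Standard error and a 95% confidence interval for the difference in means.
se = np.sqrt(treatment.var(ddof=1) / len(treatment)
             + control.var(ddof=1) / len(control))
ci_low, ci_high = ate - 1.96 * se, ate + 1.96 * se

print(f"Estimated effect: {ate:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```

Because assignment is random, the difference in means is an unbiased estimate of the intervention’s average effect; the reverse-inference work of explaining why the effect occurs takes more than this simple comparison.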

Quantitative methods, in isolation, are hopelessly inadequate for understanding the risks and benefits of AI in society. As we develop comprehensive approaches to ensuring the utility of AI systems, I think we have much to learn from Cialdini’s idea of full-cycle research, where ethnography, lab experiments, and field experiments support each other in building a clearer, richer understanding of an issue.

We desperately need to develop a consumer protection ecosystem for social technologies, including AI

Algorithmic Consumer Protection

I believe that researchers and journalists who work on algorithmic accountability should make consumer protection (and thus utility) a central concern in our work to manage the societal impact of AI. As I have argued elsewhere, we desperately need to develop a consumer protection ecosystem for social technologies, including AI. Just today, Pierre Omidyar, founder of eBay and co-founder of the Omidyar Network, published a list of 6 poorly understood threats to democracy from social technology: areas where he has been funding grants for 13 years, with limited answers. Without high-quality, usable, independent research on the impact of social technologies, we lack the ability to make wise decisions for our collective future.

Consumer protection is a major motivation behind CivilServant, the nonprofit I’m starting that supports citizen behavioral science online (we’re hiring). Other recent institutions include the Alan Turing Institute (also hiring), Data & Society, AI Now (hiring), and the forthcoming University of Michigan Center for Social Media Responsibility (also hiring). To ensure the benefits of social technology and AI, we need much, much more growth in consumer protection.

Finally, I wonder if framing algorithmic accountability around fairness risks limiting it to a progressive cause. In 2017, we have to acknowledge that fairness is not a universally held political value in Western society. Plenty of people are comfortable with judicial, financial, and immigration systems that give certain kinds of people systemic advantages and disadvantages. Even as we work on issues of fairness, I wonder whether an emphasis on consumer protection (utility) would make it easier to explain the full importance of accountability.

As the public and tech industry struggle with growing doubts about the risks of social technology and AI, I remain optimistic about the benefits we can achieve together as a society. To make wise decisions and ensure consumer protection, we will need major advances in better, faster, and more widespread research on utility and fairness alike.

I’m still working through these ideas, so please share your comments and reactions. At CivilServant, I’m also looking for people who share our vision to help us find funding, choose research projects, and chart a wise course as an organization. If you or someone you know can help, please get in touch!
