Remaking Large-Scale Behavioral Research for Democracy: New Paper at CHI 2018

Field experiments can guide wise use of platform power if we re-design the relationship between democracy & behavioral science

J. Nathan Matias
7 min read · Dec 14, 2017

It’s time to admit that designers and internet researchers have become powerful policymakers governing human affairs. As social platforms and intelligent agents become routine in the daily life of billions of people, the public has come to expect these systems to address deep-seated social ills.

Tech companies are currently expected to manage social problems including terrorism, discrimination, suicide, self-harm, eating disorders, hate speech, child pornography, misogyny, copyright violation, and political polarization, to name a few. Advocacy organizations have even opened lobbying wings in San Francisco, hoping to influence company policies.

Over the years, I’ve argued that we have an obligation to test the risks and benefits of social interventions online, but there’s a catch: behavioral experiments tend to be designed for top-down control rather than a democratic society.

Behavioral experiments tend to be designed for top-down control rather than a democratic society

This week, CHI2018, the premier academic venue in human-computer interaction, accepted a paper by me and Merry Mou that reports on the last two years of our work to redesign large-scale experiment software for democracy. Here’s a pre-print version:

Matias, J. N., Mou, M. (2018). CivilServant: Community-Led Experiments in Platform Governance. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM. (forthcoming)

If you have time, we encourage you to read the full 10-page paper. We try to offer an honest report on the ethical and political values of our project, a rough roadmap on big challenges, and progress on the messy work of remaking behavioral policy to be citizen-led. This isn’t a topic with easy answers, so I encourage you to read our paper for more about the complex lessons we learned.

How Do Experiments Happen Online?

Behavioral experiments are now a common part of social tech; you’ve probably been in a few dozen experiments already today.

Software engineers and designers now work in a process of "continuous experimentation" that in some companies tests tens of thousands of design interventions per year. These platform-centered experiment infrastructures tend to share common goals: making field experiments an efficient part of software quality testing, and making behavioral experiments accessible to engineers and designers without social science training.
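As a rough sketch of how these infrastructures bucket people into conditions (a generic illustration, not any particular company's system), a platform might assign each account to an experiment arm by hashing an account identifier together with an experiment name, so assignments stay stable without storing per-user state. The function and names below are hypothetical:

```python
import hashlib

def assign_arm(user_id: str, experiment: str, arms=("control", "treatment")) -> str:
    """Deterministically bucket a user into an experiment arm.

    Hashing the experiment name together with a user ID gives a stable,
    roughly uniform assignment without storing per-user state. A generic
    sketch, not any specific company's infrastructure.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

# The same user always lands in the same arm of a given experiment.
print(assign_arm("user-12345", "new-onboarding-flow"))
```

Because assignment is deterministic, a team can run thousands of such experiments in parallel and recover each person's condition at analysis time without a lookup table.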

A recent paper by the Bing team reports on their last 21,220 behavioral experiments. In large companies, researchers don't think so much about individual experiments as about populations of experiments that are constantly testing many different ways to influence millions of people. A full research cycle takes 1–2 weeks.

These streamlined experiments depend on keeping participants uninvolved and unaware of research. None of these systems have publicly-documented features for informing or debriefing users; deception-based studies (which are defensible in some circumstances) are the default. Except for rare scandals and a few efforts by corporate researchers, all of this research remains a trade secret, away from public awareness or accountability.

Because companies do so many experiments in secret, even the best-intentioned teams end up with disproportionate behavioral power compared to the people who use their platforms. At Uber, for example, this behavioral-science information asymmetry has allegedly allowed the company to steer its drivers into acting against their own interests in order to increase Uber's profits.

Because companies do so many experiments in secret, they end up with disproportionate behavioral power

Despite this immense capacity for behavioral research, tech companies continue to get caught out by the societal harms enabled by their products. When they do find solutions, their research remains a trade secret.

How can we think about this situation, and how might we remake online behavioral research to benefit society more reliably and accountably? Fortunately, we're not the first to ask this question.

Behavioral Research in an Open Society

In the paper, we revisit two leading 20th-century thinkers who had grave doubts about the role of social experiments in democracy, and who became founding figures in philosophy and behavioral policy: Karl Popper and Donald Campbell.

In open societies, social experiments support the public to evaluate government policies "so that bad or incompetent rulers can be prevented from doing too much damage."

In The Open Society and Its Enemies, Karl Popper writes about the uses of behavioral research in social policy. Writing from New Zealand in exile from Nazi-controlled Austria, Popper describes social experiments in what he calls "open" and "closed" societies. In closed societies, paternalistic experts use science to shape public behavior toward their own goals, justifying their actions with the argument that "the learned should rule." In open societies, social experiments support the public to evaluate government policies "so that bad or incompetent rulers can be prevented from doing too much damage."

Popper saw statistical tests and the rejection of null results as deeply political activities. He argued that experiments are more than a way to understand behavior; they are political systems for social improvement through democratic rejection of ineffective policies and leaders. For that to happen, the public needs to shape the research, know the results, and have real political power over decisions. Without citizen power, behavioral experiments become another tool of authoritarian power.

while ignorance of policy outcomes is a serious peril, it is also perilous to develop and use experimental knowledge apart from democracy

Fifteen years later, the methodologist and founding figure of policy evaluation Donald Campbell described a practical vision for social experiments in an open society. By 1971, the U.S. government was already converting its record-keeping to thousands of IBM System/360 machines, imagining the use of data to improve education, fight poverty, and usher in a "Great Society."

US National Security Agency System/360 85 Console in 1971. Image source: NSA via Wikimedia Commons

As the U.S. government adopted randomized trials from Campbell’s textbook, he worried that government experiments would threaten the “egalitarian and voluntaristic ideals” of democracy. “Is the open society an experimenting society?” he asked, implying that it might not be. Campbell argued that while ignorance of policy outcomes is a serious peril, it is also perilous to develop and use experimental knowledge apart from democracy.

In a 1971 lecture “The Experimenting Society,” Campbell proposed statistical and social processes for democratic field experiments. He proposed research where citizens are “co-agents directing their own society,” defining goals, shaping variables, designing interventions, and interpreting, re-analyzing, and debating results. Campbell also anticipated today’s replication crisis, suggesting that community-led experiments and open data could dramatically increase the quality of policy evaluation and the social sciences.

While researchers passed around photocopies of Campbell’s lecture for decades, citizen-led experiments and data analysis seemed impractical in the years before the personal computer and the public internet. CivilServant is directly inspired by Campbell’s idea of a democratic experimenting society.

Four Challenges for Community-Led Behavioral Science

In the article, we outline four large unsolved challenges for anyone trying to design field experiments for an open society.

Community Participation: Any process for evaluating social interventions will structure power in some way. That quickly becomes complicated online, where some of the least empowered people are those who allegedly organize to harm others. Because our work often focuses on risk and harm, no process can protect the most vulnerable and also guarantee equal participation. With CivilServant, we borrowed ideas from urban planning to guarantee a baseline of rights and respect, while also focusing on our core goals of a fairer, safer, more understanding internet. We’re constantly trying new approaches.

Research Ethics: We’re glad that computer scientists and political scientists are rethinking the ethics of social experiments. But calls for ethics don’t go far enough; they imply that researchers rather than the public should be the ones to hold research accountable. With CivilServant, we’re trying to invent new procedures for research accountability and test them empirically.

Open Knowledge and Transparency: We created CivilServant to generate open knowledge. But behavioral data can also be incredibly sensitive. For our work to be truly accountable to the people we serve, we need to share our research data. To achieve that ideal, we need ways to reliably protect people’s privacy while opening our work to scrutiny. For now we keep our research data private and require university ethics approval for re-analysis of our data.

Deliberative Replication: In Campbell’s experimenting society, randomized trials are a plentiful form of knowledge generated by citizens who develop their own local knowledge rather than rely on studies conducted elsewhere. We have designed CivilServant to support these community replications, and I’m hoping we’ll have more results to share from this process in 2018.
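To make the statistical side of community replications concrete, here is a minimal sketch (my illustration of the general idea, not CivilServant's actual code) that pools hypothetical effect estimates from several community-led replications with an inverse-variance, fixed-effect meta-analysis. In a deliberative process, communities would weigh these pooled numbers alongside their own local knowledge rather than treat them as the final word.

```python
import math

def pool_replications(estimates):
    """Inverse-variance (fixed-effect) pooling of replication results.

    `estimates` is a list of (effect, standard_error) pairs, one per
    community replication. Returns the pooled effect and its standard
    error. Illustrative only: deliberative replication would weigh
    local context alongside these numbers.
    """
    weights = [1.0 / se ** 2 for _, se in estimates]
    pooled = sum(w * eff for (eff, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical effect estimates (effect, standard error) from three communities:
communities = [(0.08, 0.03), (0.05, 0.04), (0.11, 0.05)]
print(pool_replications(communities))
```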

What You’ll Learn in Our Article

This post is just a teaser for what we’ve written in our article about CivilServant. If you read the full paper, you can learn:

  • How does the CivilServant software actually work?
  • What kinds of experiments can CivilServant support?
  • What is the process for a community to work with CivilServant?
  • How have communities on reddit used CivilServant to test their policies?
  • How did subreddits react in community debriefings about research results? How do they debate ethics, policies, and research methods?
  • How have platforms and communities made use of our research findings?
  • What can designers of other experimentation infrastructures learn from our experience?

The Community Knowledge Spiral is one way to think about the research process supported by CivilServant

The Future of Community-Led Experiments

Can community-led experiments ever reach the scale required to meaningfully advise the use of platform power in an open, democratic society? With CivilServant, we have shown that it's possible to redesign experimentation infrastructures for an open society. Given the implications for human flourishing and freedom, we call for further progress on the politics and design of online experiments.

CivilServant is now becoming a nonprofit incubated by Global Voices, and we have some initial funding from the Ethics & Governance of AI Fund, the Kahneman-Treisman Center for Behavioral Policy at Princeton, and the Mozilla Foundation. We've recently hired our first two engineers and will soon be announcing a research manager position. Thanks to my two-year post-doc at Princeton in the Paluck Lab, CITP, and Sociology, we have a runway to continue the project.

If you’re interested in our work to re-make behavioral science in a digital era, I would love to talk. You can find me on Twitter at @natematias and at my Princeton email address.
