How Meta Spins the Uncertain Science of AI Safety

Firms spin scientific results as proof of safety while also undermining the generalizability of the findings. Where do we go from here?

J. Nathan Matias
Sep 29, 2024


Does a round of scientific studies show, as Meta claims, that Facebook's algorithms didn't cause much harm during the 2020 election? Or should we believe the scientists themselves, some of whom disagree?

This controversy became more complicated this week with news that the company was changing the algorithm out from under the researchers while the studies were underway, which makes it hard to draw any general claims.

Sadly, scientists could have predicted this conflict before the 2020 election project even got underway.

To understand why, you have to understand how recommender systems work. Recommender systems are assemblages of many models, which engineers and data scientists regularly update, as Arvind Narayanan writes in an excellent summary for the Knight First Amendment Institute.

Arvind Narayanan’s summary of how social media algorithms lead to emergent effects
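To make that concrete, here is a hypothetical sketch of what such an assemblage might look like: several specialized prediction models whose scores are blended by weights that engineers retune over time. All component names and weights below are invented for illustration; they describe the general pattern, not Facebook's actual system.

```python
from dataclasses import dataclass

# Hypothetical sketch of a ranking pipeline as an assemblage of models.
# Real systems (Facebook's included) are far more complex; the component
# names and weights here are invented for illustration.

@dataclass
class Candidate:
    post_id: str
    features: dict

def predict_click(c: Candidate) -> float:    # one model among many
    return c.features.get("click_score", 0.0)

def predict_comment(c: Candidate) -> float:  # retrained on its own schedule
    return c.features.get("comment_score", 0.0)

# Engineers routinely retune these weights, swap models in and out,
# and retrain components on fresh data, so "the algorithm" is a
# moving target rather than a fixed object.
WEIGHTS = {"click": 1.0, "comment": 3.0}

def rank(feed: list[Candidate]) -> list[Candidate]:
    def score(c: Candidate) -> float:
        return (WEIGHTS["click"] * predict_click(c)
                + WEIGHTS["comment"] * predict_comment(c))
    return sorted(feed, key=score, reverse=True)
```

Because each component can change independently, "the algorithm" at the end of a study may differ substantially from the one at the start.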

This complexity makes it unlikely that a study will have a consistent "intervention" throughout: randomized trials with recommender systems struggle to stay internally consistent because the algorithm keeps changing. In my 2016 work on influencing recommenders to reduce the spread of unreliable news, I had to restart a study when Reddit's algorithm changed, and I used that opportunity to write about the generalizability problem. While scientists on the 2020 election project carefully noted this issue, Meta has not been so forthright about the results of the 2020 studies.
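To see why this threatens a trial's conclusions, here is a minimal simulation sketch. All numbers are hypothetical and not drawn from the 2020 studies; it assumes the platform updates its algorithm halfway through the experiment, changing the intervention's true effect mid-stream.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # hypothetical participants, half enrolled under each regime

# Suppose the intervention's true effect is 0.30 under the algorithm
# deployed at launch, but drops to 0.05 after a mid-study update.
# (Illustrative effect sizes only.)
effect_before, effect_after = 0.30, 0.05

treated = rng.integers(0, 2, n)        # randomized assignment
regime = np.repeat([0, 1], n // 2)     # 0 = before update, 1 = after
true_effect = np.where(regime == 0, effect_before, effect_after)
outcome = 0.5 + treated * true_effect + rng.normal(0, 1, n)

# The pooled estimate blends two different interventions:
pooled = outcome[treated == 1].mean() - outcome[treated == 0].mean()
print(f"pooled estimate: {pooled:.2f}")  # ~0.17, true under neither regime
```

The pooled estimate averages two different interventions and describes neither regime, which is exactly the problem with drawing general claims from a trial run on a moving target.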

Changing algorithms create another problem: research findings may not apply for long, even if they are valid. That's something that I, along with Rob Kitchin, Kevin Munger, and others, have been arguing for years, in my case across multiple field experiments and an article in Nature.

Until we solve this problem of intervention validity in independent research on adaptive algorithms (like recommender systems), scientists won't be able to provide reliable answers about a system's effects that can guide future decisions. And as we have seen in this week's news, companies are not incentivized to produce that knowledge:

  • their business models require them to change the algorithms regularly
  • the algorithms adapt to changing user behavior and contexts
  • firms have reputational and regulatory incentives to support strong but unreliable causal claims

The authors of the scientific studies in the 2020 Facebook Elections project are aware of this and wrote very clear qualifications to their findings. And the critics are right to point out this important issue. Rightly or wrongly, people expect randomized trials to provide gold-standard answers that stand the test of time. That's how Meta's PR is trying to spin the results, and unfortunately, the state of the art in science does not make such strong claims possible at this time.

I see this as an important challenge for science, for policy, and for tech leaders who are willing to think long-term about the health of the industry. Society needs causal knowledge, and we need to create the conditions for independent knowledge that people can trust, as I have been arguing for nearly a decade now.

The good news is that these valuable 2020 election studies do move science forward toward answers, even if they can't yet provide definitive ones.

In the meantime, policymakers and the public are right to be concerned about this mismatch between what they expect from randomized trials and what they got from this project — in addition to their reasonable fears of corporate skullduggery, whatever the inside story is. Despite frustrating and painful disagreements, I hope this debate galvanizes people to collaborate and moves the conversation forward.

Image source: Stock Catalog


J. Nathan Matias

Citizen social science to improve digital life & hold tech accountable. Assistant Prof, Cornell. citizensandtech.org Prev: Princeton, MIT. Guatemalan-American