How Anyone Can Audit Facebook’s NewsFeed
All you need for citizen behavioral science is a spreadsheet, patience, and longsuffering friends
How do small changes to Facebook affect your life? And how would you know if they did?
Ever since auditing Facebook’s emoji pride button with Aimee Rickman and Megan Steiner last June, I’ve been looking for other easy, powerful ways for anyone to study the impact of Facebook’s changes in our lives. How much can a single person learn about Facebook with a little patience and a spreadsheet? More than you might expect!
(Update 2018: read about my follow-up experiment in Attributing Cause in Algorithm Audits. In May, I shared combined results from 11 of these audits)
Do Colored Backgrounds Influence Who Sees & Likes Status Updates?
As you might have noticed, Facebook currently offers people an optional background canvas for short status updates. I have been seeing more of these and was wondering what might be going on. Is this just a way to make your feed look nicer, or might it have an effect on how your friends respond?
In recent years, news publishers, political campaigns, and marketers have become very good at crowding out people’s feeds. As a result, Facebook is currently struggling to help people balance the visibility of personal updates with promoted material. Might the company be providing these backgrounds to attract more attention to personal updates?
Online attention is a classic tragedy of the commons; everyone has incentives to acquire and exploit as much attention as possible. But it doesn’t have to be a tragedy, as Nobel laureate Elinor Ostrom discovered in her research: citizen groups can play an important role in monitoring, managing, and transforming common resources.
What might a citizen behavioral scientist do to make sense of this issue? One option would be to monitor what Facebook shows you. You could even compare what you see to what Facebook could have shown you, as some researchers and designers have done. You can also conduct your own experiment, as I did, to discover the effect of using these backgrounds on the attention that your updates receive.
How I Tested the Effect of Colorful Backgrounds on Attention Toward Poetry in My Facebook Feed
True or not, I’ve personally felt that my Facebook feed has become more filled with news articles, charity fundraisers, and product ads over the years. Many of my friends are activists, journalists, researchers, and nonprofit employees, so it makes sense that they share these things. I probably shouldn’t admit this as a researcher of civic media, but I was becoming emotionally worn down by days where all I saw were arguments, fear, and outrage, however justified.
This fall, inspired by the movement to Occupy Facebook with Art and my time facilitating The Atlantic’s Twitter book club, I decided to share a poem a day on Facebook. It was also a perfect opportunity to ask my question about the NewsFeed: does using a colorful background increase interactions with text updates on Facebook?
To answer this question, I needed to set up what researchers call a field experiment: a way to test the question out in the world. The experiment would let me compare what happens when I give a poem a colorful background with what happens when I don’t. Here’s what I needed:
- something to measure that was meaningful to my question
- a way to record what I measured
- a guide for choosing what background I should use on a given day
- a personal rule to avoid interfering in the results
- a way to compare the conversations
In my experiment, I measured the total number of comments and likes that a poetry discussion received, including likes on comments in the discussion. But a measurement doesn’t have to be a number. As Betsy Paluck has pointed out, I could instead have written notes each day about how the poetry conversation went and compared those notes.
I recorded my measurements in Google Sheets. Each time I posted a new poem, I counted the previous day’s comments and likes and added them to the spreadsheet. I also recorded the date (and time) that I shared the poem, along with the poetry text. For privacy reasons I avoided recording any personal information.
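If it helps to picture that log, here’s a minimal sketch of its structure as an R data frame. The two rows of values are placeholders for illustration, not my actual data:

# one row per poem: when I posted it, which background the random draw
# assigned, and the interactions I counted a day later
poems <- data.frame(
  date         = as.Date(c("2017-11-01", "2017-11-02")),  # placeholder dates
  Condition    = c("Color", "Text"),
  interactions = c(14, 6)  # comments + likes, including likes on comments
)
poems$wday <- as.POSIXlt(poems$date)$wday  # day of the week (0 = Sunday)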
To be confident in what I learned, I needed something to guide my daily decision to use a colored or plain background. If I relied on my own choices, my conscious or unconscious decisions might lean toward a specific result. For example, if I tended to reach for colorful backgrounds on cloudy days, and the weather also affected how my friends responded, any difference I observed might come from the weather rather than the background. The experiment would then show a correlation, but not causation. By using a random number to guide my decisions, I could discover the average effect of the background itself.
The simplest way to decide would have been to flip a coin, and that’s a fine option if you try this yourself. In my experiment, I wanted an equal number of plain and colored backgrounds, to be even more confident in the result. So I created a spreadsheet tab with two columns: a column for which background to choose (“Condition”), and a column with a random number (using RANDBETWEEN).
In the “Condition” column, I filled half of the rows with the word “Color” and half with “Text.” Then, by sorting the spreadsheet on the random number, I got the software to randomly decide which background to use each day.
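If you prefer code to spreadsheets, the same balanced assignment takes a few lines of R. This is just a sketch: the 22 days match my experiment, but any even number of days works.

# balanced random assignment: equal numbers of "Color" and "Text" days,
# shuffled into a random order
set.seed(2017)  # optional: makes the schedule reproducible
n_days <- 22
schedule <- data.frame(
  day       = 1:n_days,
  Condition = sample(rep(c("Color", "Text"), each = n_days / 2))
)
head(schedule)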
I also needed a personal rule to avoid interfering with the results. During the experiment, I remember wishing that I could use a colorful background on poems that I especially loved. When nobody liked or commented on a poem, I was tempted to promote it further. While that’s perfectly reasonable in normal life, it would have spoiled the experiment. I did allow myself to respond after a day had passed and I had already recorded my measurement.
Finally, I needed a way to compare the conversations. While I could always just compare the average likes and comments in the spreadsheet software, I wanted to learn whether any differences were statistically significant: whether I could reject the possibility that the difference was just a fluke of chance. This last step is the only one that I couldn’t do in Google Sheets (though it might have been possible with the statistics add-on).
For those of you who are interested, I used a linear regression that predicted the log-transformed number of interactions from the day of the week and whether the poem had a colored background. A negative binomial model produced a similar result.
# predict log-transformed interactions from background condition and weekday
lm(log1p(interactions) ~ Condition + factor(wday), data = poems)
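For anyone replicating this, here’s a minimal sketch of that robustness check, assuming a poems data frame with the interactions, Condition, and wday columns described above:

# negative binomial count model as a robustness check (MASS package)
library(MASS)
nb <- glm.nb(interactions ~ Condition + factor(wday), data = poems)
summary(nb)
# exponentiating the Condition coefficient gives the multiplicative
# effect of the background on interactions
exp(coef(nb))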
What I Discovered in My Personal Experiment
Over 22 days, I learned that using colored backgrounds caused my friends on Facebook (I have just over 2,000) to like and comment on poems 2.1x more. Poem conversations with a plain background received 7 likes and comments on average, ranging from 0 to 15. With or without colored backgrounds, poems were often the least liked posts on my feed for that day, but I knew going into this that not everyone likes poetry.
What Did and Didn’t I Learn?
In a conversation about these results on Facebook, many of my friends asked further questions:
Do colored backgrounds get more likes and comments because of Facebook’s algorithm or people’s behavior? Friends wanted to know if Facebook’s algorithm was prioritizing all colored backgrounds, or if the larger, catchier poems were just attracting more attention. This is what researchers call a question about the “causal mechanism.”
Whether or not a study reveals the full reasons for an effect, research like this still makes a meaningful contribution to knowledge. In the 1700s, James Lind was able to prove that oranges and lemons could cure scurvy, but he couldn’t explain why. Over the next 175 years, scientists had to discover the existence of vitamins, synthesize vitamin C, and test its effects on scurvy before this question could be answered.
What is the meaning of what I measured? Friends debated what it means to count comments and likes. They pointed out that I had no way to measure what my friends saw on social media, and that the feed wasn’t the only way that people learned about the poems. Many of these conversations seemed to be about the previous question: whether the study was measuring people’s behavior or the algorithm’s behavior. Untangling human and algorithm behavior is an unsolved problem in science right now, so I was delighted to see this conversation happen.
How did people experience the poetry? This month-long project led to a long, wonderful conversation about how to read poetry online, and what we look for from poetry in our lives. Some appreciated the chance to see the poems even if they didn’t click on them. Others refused to click the “like” button because doing so seemed to cheapen the poems’ meaning.
Was this project ethical? I also debriefed my friends for a discussion about the ethics of my experiment. Should I have asked for permission before trying to bring more poetry into our lives? One friend, a medical researcher, sent me disappointed private messages. Others were intrigued or delighted. When I offered to remove anyone’s data from my research, no one took me up on it, not even the friend who questioned the project’s ethics. If you decide to try this yourself, consider talking about it with your friends in advance so it comes as less of a surprise.
What didn’t I expect? My friend Scott sent me a screenshot of emails that Facebook sent him, encouraging him to look at my poems. All this time, I had been focusing on the Facebook website or app, and I had forgotten that Facebook has many ways to direct people’s attention.
Try This Experiment Yourself
If you’re curious to ask this question, why not try a similar experiment yourself? Your question need not be about poetry background colors. You might decide to praise the people you admire or write about what you had for lunch. Anything can be an experiment if you follow these basic steps and go about it with respect for the well-being of others.
You might want to test the effect of Facebook’s new Snooze feature, or different ways to set up conversations about politics. You could coordinate with a group of friends to see what happens if you try to break through the political bubble of the news you read, like Janet Xu, Matt Salganik, and their students did last year.
I know the stats can be tricky, so if you do decide to run your own experiment like mine (what researchers call a replication), I can do the statistics for you. If enough people reach out, I might even write code that automatically calculates results from a link to a Google Sheet. Just message me: I’m @natematias on Twitter.
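In the meantime, here’s a rough sketch of what that helper could look like in R. Everything here is an assumption for illustration: SHEET_ID is a placeholder, and the sheet is assumed to have the date, Condition, and interactions columns described earlier.

# read a Google Sheet that has been published to the web as CSV,
# then fit the same model used in my analysis
sheet_url <- "https://docs.google.com/spreadsheets/d/SHEET_ID/export?format=csv"
poems <- read.csv(sheet_url)
poems$wday <- as.POSIXlt(as.Date(poems$date))$wday  # day of the week (0 = Sunday)
summary(lm(log1p(interactions) ~ Condition + factor(wday), data = poems))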
Citizen Behavioral Science for a Fairer, Safer, More Understanding Internet
Why did I do this project? I believe that a better internet for all needs digital citizens who imagine new ways to improve online life, test them out, and hold tech companies accountable for their power in our world.
I have often argued that we need independent testing of social tech, especially when a company’s promises are great or the risks are substantial. Sometimes when I suggest this, academics respond that independent evaluations require long, complex work by experts. That’s not always the case.
While I agree that we need more resources for independent, public interest internet research, citizen science is a powerful way to manage the common good. My new nonprofit CivilServant is working to grow that vision.
If you’re interested in citizen behavioral science, send me a note (I’m @natematias on Twitter). CivilServant is a young project, and we can use all the help we can get!
Next month, you can also learn more at the public CivilServant Community Research Summit on January 27th from 1–6pm at the MIT Media Lab in Boston. Tickets are free (register here)!
Acknowledgments
First of all, I’m incredibly grateful to my longsuffering friends, who participated in this experiment with me. Thanks everyone! ❤ 📈
This project was inspired by Ben Goldacre’s NESTA-funded Randomise Me project (github), which was a website to help anyone run their own randomized trials. Special thanks also to Anneli Hershman and Joshua Cowls, for early conversations two years ago when we designed the Cornhole Experiment.