Social intelligence
Social media is hurting our society, and a solution starts with the algorithms.
Hi again, everyone, and happy September. Where did summer go?
I had the opportunity to reflect on my first month of posts during the Labor Day weekend, and I hope to bring some additional focus to my newsletter topics in the coming weeks. Expect to read more about the changes in our technology and society that are occupying my headspace, especially as we head into a consequential election and our third season with COVID-19.
Before I begin this week’s post, I want to take a brief moment and send positive vibes to my friends (and everyone else) on the West Coast dealing with the latest round of wildfires. Stay strong.
And now, onward to my thoughts of the week…
Face off
I’ve spent a lot of time this year examining social media’s impact upon our national discourse around everything from politics to public health. So naturally, I tuned in last week when I heard that Axios co-founder Mike Allen was interviewing Mark Zuckerberg about election interference and regulating misinformation.
Although the conversation was interesting, I came away thinking that the most insightful part of the broadcast was actually the interview’s animated preamble:
Source: Axios on HBO
In this cartoon graphic, Zuckerberg is depicted wielding an oversized “Like” button, attempting to squash nefarious actors (and Joe Biden…) who keep popping up from holes in the ground, only to disappear again whenever Zuckerberg closes in.
The metaphor of whack-a-mole is quite apt. Throughout this year, Facebook and other social platforms have taken unprecedented actions to quell conspiracy theories and misinformation that are spreading on their platforms — yet it seems that bad actors are always a step ahead.
Here are just a few examples:
In July, a video from “America’s Frontline Doctors” claiming hydroxychloroquine is “a cure” for COVID-19 and “you don’t need masks” gained 14 million views in just 6 hours on Facebook.
A few weeks ago, Zuckerberg admitted that an “operational mistake” meant Facebook failed to remove an event page encouraging armed militia members to descend on Kenosha, WI, to confront protests over the police shooting of Jacob Blake. The event attracted at least 2,600 responses.
Just last week, misinformation about the source of the Oregon wildfires went viral on social media. One post falsely claiming that antifascist protesters had deliberately set the fires was shared nearly 10,000 times on Twitter and reached up to 680,000 people on Facebook.
Although Facebook and other platforms are pouring resources into content moderation to prevent these incidents, I fear it will never be enough. Social media is designed to make misinformation and inflammatory content go viral.
Fanning the flames
Social media platforms such as Facebook rest on a foundation of algorithms that promote the content most likely to generate engagement.
Engagement is driven by emotions, and some feelings are more effective than others at prompting us to click, like, and share. From a 2012 study by Wharton professors Jonah Berger and Katherine L. Milkman:
“Positive and negative emotions characterized by activation or arousal (i.e., awe, anxiety, and anger) are positively linked to virality, while emotions characterized by deactivation (i.e., sadness) are negatively linked to virality.”
As the researchers point out, activating emotions don’t have to be negative — and neither does engaging content. In fact, the majority of engaging content might be completely innocuous. Most of the posts from friends and family in my Feed inspire emotions ranging from joy (when I see photos of my cousin’s dog) to pride (when my Dad shares an interesting fact about family history) that activate me to engage in positive ways.
But Facebook is a massive platform with 2.5 billion monthly active users. Every 60 seconds, users add 317,000 status updates, 147,000 photos, and 54,000 shared links to the platform.
Each one is fed into an algorithm whose purpose is to promote it as far and wide as possible if it is likely to provoke activating emotions in its audience. Even if only a small fraction of those posts inspire negative activating emotions (anger, anxiety), that is still a tremendous amount of potentially harmful content that Facebook’s algorithms help go viral.
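To make the mechanics concrete, here is a deliberately simplified sketch of an engagement-optimizing feed ranker, written in Python. The post fields, weights, and function names are my own illustrative assumptions, not Facebook’s actual system.

```python
# Toy model of engagement-based feed ranking. Every name and number
# here is an illustrative assumption, not Facebook's real algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # model estimate, 0..1
    predicted_likes: float   # model estimate, 0..1
    predicted_shares: float  # model estimate, 0..1

def engagement_score(post: Post) -> float:
    """Score purely on predicted engagement; shares weigh most
    because they push the post in front of new audiences."""
    return (1.0 * post.predicted_clicks
            + 2.0 * post.predicted_likes
            + 4.0 * post.predicted_shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The most "activating" content rises to the top, whether the
    # activating emotion is joy at a cousin's dog or outrage at a lie.
    return sorted(posts, key=engagement_score, reverse=True)
```

Note what is absent: nothing in this objective measures accuracy or harm, only the probability that we click, like, and share.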
And it’s not just content, but also groups that are promoted through Facebook’s recommendation engine. In fact, the company’s own research from 2016 concluded, “64% of all extremist group joins are due to Facebook recommendation tools.”
To be fair to Facebook, this isn’t its problem alone. Algorithmic recommendations also drive 70% of the time spent watching videos on YouTube, and many of those videos promote harmful conspiracy theories that inspire activating emotions.
Social media leaders understand the problem. As Zuckerberg wrote in 2018:
One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content. This is not a new phenomenon. It is widespread on cable news today and has been a staple of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services.
His post was accompanied by an infographic showing that engagement increases exponentially as content approaches the line at which it would be prohibited for violating platform standards:
Source: Facebook
Because social media algorithms promote the most engaging material, content achieves maximum reach by edging ever closer to inciting violence or jeopardizing public health and safety.
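To see what that curve implies, here is a toy numeric model, assuming (as I read the infographic) that engagement grows exponentially with a post’s proximity to the policy line; the constants are arbitrary.

```python
import math

# Toy model of the engagement curve described above. "Proximity" runs
# from 0.0 (clearly benign) to 1.0 (right at the policy line); content
# past the line is removed, so its reach drops to zero. The exponential
# shape and the constant 4.0 are assumptions for illustration only.
def expected_engagement(proximity: float) -> float:
    if proximity > 1.0:
        return 0.0  # violating content is taken down
    return math.exp(4.0 * proximity)  # engagement spikes near the line

for p in (0.0, 0.5, 0.9, 1.0):
    print(f"proximity={p:.1f} -> relative engagement={expected_engagement(p):5.1f}")
```

Under these assumptions, a post sitting just inside the line earns roughly fifty times the engagement of a clearly benign one, which is exactly the incentive the infographic describes.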
Facing the music
I am very concerned about what this means for liberal democratic norms and institutions.
Norms such as freedom of speech and freedom of assembly are at risk if platforms and politicians become overly prescriptive about the types of content allowed on social media.
At the same time, institutions such as democratic elections and an independent judiciary will not withstand the breakdown in social trust that will ensue from the continued spread of lies and inflammatory content.
Rather than focus on moderating the content itself, however, I would argue that regulation should start with the algorithms. After all, there has always been a market for political misinformation, conspiracy theories, and hate speech. Engagement-based recommendation engines have drastically expanded that market by making it possible for content that inspires activating emotions, from outrage to terror, to reach more people faster than ever before.
We need a new way of organizing social media feeds that respects and perhaps even encourages content that inspires deactivating emotions — serenity, self-reflection, and, yes, even sadness. The issues at the heart of our political and social discourse are complex and nuanced, requiring citizens to approach them dispassionately without the expectation of quickly finding the right answer or someone to blame.
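What might that look like in practice? As a thought experiment, the toy ranker sketched earlier could be inverted: suppose a hypothetical classifier scores each post’s emotional arousal, and the feed discounts high-arousal content instead of amplifying it. Everything here, the arousal score and the penalty weight, is an assumption for illustration, not a proposal anyone has shipped.

```python
from dataclasses import dataclass

# Thought experiment: discount activating content rather than boosting
# it. The arousal score would come from some classifier; here it is
# simply an assumed input on a 0..1 scale.
@dataclass
class ScoredPost:
    text: str
    engagement: float  # predicted engagement, as before
    arousal: float     # 0.0 = deactivating (calm, sad), 1.0 = activating

def calm_score(post: ScoredPost, penalty: float = 0.7) -> float:
    """Downweight high-arousal posts; a penalty of 0.7 means a fully
    activating post keeps only 30% of its engagement score."""
    return post.engagement * (1.0 - penalty * post.arousal)

def rank_feed(posts: list[ScoredPost]) -> list[ScoredPost]:
    return sorted(posts, key=calm_score, reverse=True)
```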
Social media environments without engagement-maximizing algorithms would be less instantly gratifying for users, and they would certainly be less profitable for technology companies. But they would be better for democracy, liberal values, and social trust — all of which technology leaders claim to care about, and which I believe are more important than any company’s monthly active users or earnings per share.