
After the deadly Capitol insurrection on Jan. 6 by a faction of Donald Trump supporters, major social media companies took the unprecedented step of banning a sitting U.S. president from their platforms.

Now, companies like Facebook are grappling with how to effectively moderate content to prevent future violence, while politicians from both sides of the aisle weigh policies that would keep social media platforms from spreading misinformation without limiting free speech.

On Jan. 22, during an online panel titled “The Storming of the Capitol and the Future of Free Speech Online,” four experts from Stanford University’s Cyber Policy Center, which focuses on digital technology and government policy, discussed how social media platforms have helped cultivate political radicalization and extremism, the potential consequences as these same platforms aggressively crack down on false information, and the government’s role in regulating social media in the near future.

The experts agreed that both the platforms and the government face an incredibly challenging task ahead.

“When it comes to incitement, it’s very, very difficult to develop a clear concrete standard that will apply prospectively to any type of situation that might lead to law-breaking or violence,” said Nathaniel Persily, a co-director of the center.

To understand what led to the deadly insurrection on Jan. 6, Renée DiResta said, it helps to know that the event was not the overnight result of online coordination by one large group of Trump supporters or conservatives.

“This is not one faction, if you will, this was multiple factions that came together,” said DiResta, a research manager at the Cyber Policy Center’s Internet Observatory. “So there’s a need to understand ways in which network activism online manifests and ways in which these factions form.”

DiResta suggested the event reflected a process of polarization that was years in the making and included various groups such as militias, white supremacists and, more recently, followers of the far-right conspiracy theory known as QAnon. Groups like these can occupy “echo chambers” that are further reinforced on online platforms, she said. That, coupled with an effective disinformation campaign in which Trump and his allies questioned the integrity of the U.S. election system based on misleading and false information, demonstrated the role social media played in the lead-up to the insurrection.

“There was this repetitive process that we saw over and over again for months in which an incident — an incident that was documented, it really happened in the world — was recast as part of a broader narrative, and then sometimes those narratives were additionally recast into the realm of conspiracy,” DiResta said.

This process was well documented through research conducted by the Election Integrity Partnership, a coalition that’s composed of Stanford and other research groups.

Their analysis found cases in which a real image of ballot envelopes from the 2018 midterm election in a dumpster, or a video of a person who appeared to be collecting or delivering absentee ballots on behalf of another person (a practice sometimes called “ballot harvesting,” which is legal in some states), was misleadingly packaged as evidence of massive voter fraud. The posts were then amplified by social media accounts belonging to right-wing media outlets, conservative influencers and, in these two cases, Trump’s son, Donald Trump Jr., who has 6.6 million followers on Twitter.

“For people who occupy certain echo chambers, this is what they saw over and over and over again,” DiResta said. “So when Trump’s loss manifested, they were primed to believe that this was a result of a massive steal … (and) that generated extraordinary amounts of anger.”

Transparency efforts

Prior to the Capitol riot, and even before the Nov. 3 election, Facebook and other social media companies made efforts to combat misinformation on their platforms. Twitter slapped fact-check labels on tweets; Instagram attached links to official information on COVID-19 and the U.S. election underneath users’ photos; and Facebook temporarily tweaked its news feed algorithm so that news from more reliable publications was displayed more prominently.

In October, Facebook touted promising results from these measures, saying it had removed 120,000 pieces of content that violated its policies on voter information, and promised to do more.

But this type of content moderation, leading up to the outright ban of Trump and some of his allies, increasingly pushed many conservatives who felt they were censored by tech companies to make the digital exodus to other platforms such as Parler, which advertised itself as a free-speech friendly platform. Parler’s app, at one point No. 1 on Apple’s and Google’s app stores after the election, was shut down when Amazon barred the site from its web-hosting services on Jan. 9.

This hasn’t stopped other platforms from growing. Gab, for example, appears to target disillusioned conservatives by similarly calling itself the “free speech social network.” Nothing in U.S. law makes it explicitly illegal to give a certain group a platform, even at the risk of hosting smaller “domestic extremist groups,” said Alex Stamos, director of the Cyber Policy Center’s Internet Observatory and former chief security officer at Facebook.

“You’re going to continue to see the separation from the companies that are trying to go after the (extremist) groups versus those that aren’t, which is not something I think we actually have a good history of or demonstration of what’s going to happen,” he said.

DiResta, however, noted that although many popular conservative influencers and their followers have recently moved to other social media and messaging sites, measuring the long-term impact of that migration also requires looking at engagement among those users.

“Account creation isn’t the only metric,” she said. “The question becomes: Do we see sustained engagement on those platforms? Did all of the millions of accounts that were created … actively continue to participate?”

Larger social media and tech companies have already applied comprehensive moderation policies, and many are also members of the Global Internet Forum to Counter Terrorism. Stamos believes that rather than returning to a normal where, for example, baselessly accusing voting machines of deleting votes is considered “acceptable political discourse,” these platforms will likely have to maintain or increase their content moderation, fact-checking and rule enforcement, as they did during last year’s election and after the Jan. 6 riot.

A tussle between law and tech

From a U.S. legislative standpoint, there’s also the question of which laws need to be considered or amended to keep speech that could incite violence from proliferating. Chief among them is Section 230 of the Communications Decency Act of 1996, which has come under increased scrutiny.

The law essentially protects internet platforms from being held responsible for the speech of their users, including hate speech, which is protected by the First Amendment. There are exceptions, including intellectual property violations and content that violates federal law, such as sex-trafficking material.

Daphne Keller, director of the Cyber Policy Center’s Program on Platform Regulation and former associate general counsel for Google, said Congress has introduced over 20 bills in the past year that would amend Section 230 in different ways.

But major “constitutional hurdles” stand in the way of regulating speech that may incite violence through laws that are effective and won’t violate the First Amendment, said Keller, who elaborated on the topic in a Jan. 22 post on the center’s blog.

Scholars from Stanford University’s Cyber Policy Center discuss the government’s role in regulating social media, among other topics, at a Jan. 22 online forum. Courtesy Stanford University’s Freeman Spogli Institute YouTube channel.

Legislators do have some legal precedents to start from. The most relevant is Brandenburg v. Ohio, in which the Supreme Court ruled that the First Amendment does not protect speech that is “directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”

Persily, the Cyber Policy Center’s co-director and a constitutional and election law expert, said that applying the case to online speech raises a pressing question: At what point can companies know that some form of speech will lead to imminent lawless action or violence?

“What kind of judgments do (platforms) need to make in order to really have good forecast about the likelihood of imminent lawless action,” he asked. “It’s almost always going to be too late.”

Once legislators can decide on the kinds of speech that should and can be prohibited, they’ll also have to figure out how to hand this responsibility to private internet companies.

“If you take a pretty vague rule prohibiting speech and then you outsource it to risk-averse platforms … they will over enforce and the overenforcement may hit people that we don’t like today and people that we do like next week,” Keller said. “One group of people we can pretty strongly predict that it will hit is members of vulnerable minority groups.”

More than two weeks after the Capitol riot, Facebook announced on Jan. 21 that it will defer the decision on whether to permanently ban or restore Trump’s account to the company’s independent Oversight Board. The group, first officially announced last May, is made up of global experts and civic leaders who take on “highly emblematic cases” to examine whether Facebook made decisions, such as the Trump ban, in accordance with its own policies, according to the board’s website.

On that same day, a group of 40 House Democrats led by Anna Eshoo, D-Palo Alto, and Tom Malinowski of New Jersey submitted letters to the CEOs of Facebook, YouTube and Twitter, accusing the platforms of helping to foster the “insurrectionist mob” and urging the executives to re-examine their algorithms that “maximize user engagement.”

It’s a follow-up to a bill the two House representatives proposed in October, the Protecting Americans from Dangerous Algorithms Act, which would amend Section 230 to hold internet platforms accountable if their algorithms boost content that violates or interferes with civil rights. In other words, it’s an attempt to regulate not speech itself but the reach of speech, something Keller believes platforms have the ability to do but which currently can’t be enforced through U.S. law without triggering First Amendment scrutiny.

“The value in identifying these barriers is to figure out how to get around them,” Keller said. “If we want a good law, we need to understand the hard limits. And the hard limits are: What is actually implementable … and what will get struck down by the courts.”
