With the midterms two months away, tech companies are gearing up: rolling out fact checks, labeling misleading claims and setting up voting guides.
The electoral playbooks used by Facebook, Twitter, Google-owned YouTube and TikTok are largely in line with those they used in 2020, when the platforms warned that both foreign and domestic actors were seeking to undermine confidence in the results.
But the wave of falsehoods in the wake of that election, including the "big lie" that Donald Trump won, has continued to spread, espoused by hundreds of Republican candidates on the ballot this fall.
That leaves experts who study social media wondering what lessons tech companies have learned from 2020 and whether they’re doing enough this year.
The plethora of election-related announcements in recent weeks points to a "business as usual" approach, said Katie Harbath, a former director of election policy at Facebook who is now a fellow at the Bipartisan Policy Center.
The return of familiar playbooks
Platforms are largely taking a two-pronged approach: squashing misleading or outright false claims and ramping up authoritative reporting from local election officials and reputable news sources.
In the first case, all four major platforms rely on labels to point out falsehoods and, in many cases, direct users to fact checks or accurate information. In some cases, users will not be able to share tagged posts and the platforms themselves will not recommend them. YouTube, Facebook and TikTok also say they will remove some specific false claims about voting and threats of violence.
Platforms are often hesitant to explain exactly how they enforce their policies, to avoid giving bad actors a roadmap. The variety of approaches to labeling and removal also illustrates the tense balance companies are trying to strike between allowing users to express themselves and protecting their platforms from being weaponized, all while facing scrutiny from politicians on both sides of the aisle.
Policies diverge more when it comes to political ads. Twitter and TikTok have banned ads for candidates and about political issues. Google and Facebook both allow them but require disclosure of who pays for them. Facebook freezes all new political ads in the week before Election Day, but will allow existing ads to continue running.
But defining when an ad or issue qualifies as political is not straightforward, leaving loopholes that experts fear could be exploited.
“It’s actually quite a confusing picture because there’s no regulation, no standards that these companies have to follow,” Harbath said. “Everyone just makes the decisions that they think are best for them and their company.”
On the flip side, all four platforms are highlighting features that aim to put more reliable information in users' feeds, such as information on candidates, voter registration, and when and where to cast a ballot. That information will also be available in Spanish on all four platforms.
Extending beyond English is an important step in addressing a “glaring omission” in previous elections, said Zeve Sanderson, executive director of the Center for Social Media and Politics at New York University.
In the closing days of the 2020 election, Latino voters were targeted by social media posts discouraging them from voting, according to voting rights activists and disinformation experts.
Evidence is mixed on how well platform policies work
Even as social media companies double down on their tactics for 2020, the researchers say it’s not always clear how effective their interventions are.
In the case of labels, there is conflicting evidence as to whether they help dispel false impressions or whether, in some cases, they may inadvertently encourage people to double down on those beliefs.
Last year, researchers at New York University looked at what happened after Twitter labeled some of Trump's tweets before and after the 2020 election as misinformation. They found that labeled posts spread further on Twitter and also took off on other platforms, including Facebook, Instagram and Reddit.
Platforms have offered only glimpses into what they know about how well their tools work. Twitter has said that after it redesigned its misleading-information labels last year, more people clicked through to read accurate information.
Meanwhile, Facebook says it will be more selective about what it labels after users said labels were "overused" in 2020. This year's labels will be more "targeted and strategic," Nick Clegg, president of global affairs at Facebook's parent company Meta, wrote in a blog post.
But for NYU’s Sanderson, that raised more questions that the company hasn’t answered.
"What was the feedback? From which users? What do the words 'targeted' and 'strategic' mean?" he said. "It would be really helpful for them to contextualize that with the actual details of what their internal research has found."
Beyond playing "Whac-A-Mole" with disinformation
It's also hard to know how well companies enforce their policies, which Harbath, the former Facebook official, described as a "huge gap."
"Companies say, 'These are our policies, these are all the things we're going to do.' But they don't talk enough about, 'Okay, but humans are fallible. The technology is not 100% perfect,'" she said.
In the hours after the polls closed in 2020, as Trump supporters began organizing online under the banner "Stop the Steal," Facebook promptly removed the first Stop the Steal group from its platform under its rules against questioning the legitimacy of elections and inciting violence. But more groups kept popping up, and Facebook couldn't keep pace.
Researchers caution that the 2020-style focus on election falsehoods fails to address the reality of 2022. Tech companies treat elections as discrete events, typically rolling out policies and then winding them down when voting is over. But the false claims don't end when the ballots are counted.
“Companies should do much more to have an always-on policy, because clearly these election integrity issues will remain in the lexicon and the conversation well beyond Election Day,” Harbath said.
The big challenge is for companies to go beyond being reactive and find ways to prevent their platforms from being used to spread these types of falsehoods so widely in the first place.
"When it comes to disinformation and election misinformation, the platforms are just playing Whac-A-Mole, trying to stay on top of one thing before something else comes out," said Spandi Singh, a policy analyst at the Open Technology Institute at the think tank New America.
Editor's note: Facebook parent Meta pays NPR to license NPR content.