Nearly three years after a Russian propaganda group infiltrated Facebook and other tech platforms in hopes of seeding chaos in the 2016 US election, Facebook has more fully detailed its plan to protect elections around the world.
In a call with reporters Thursday, Facebook executives elaborated on their use of human moderators, third-party fact-checkers, and automation to catch fake accounts, foreign interference, and fake news, and to increase transparency in political ads. The company has made some concrete strides, and has promised to double its safety and security team to 20,000 people this year. And yet, as midterm races heat up in states across America, and elections overseas come and go, many of these well-meaning tools remain a work in progress.
“None of us can turn back the clock, but we are all responsible for making sure the same kind of attack on our democracy does not happen again,” Guy Rosen, Facebook’s vice president of product management, said on the call. “And we are taking our role in that effort very, very seriously.”
Facebook provided some new details about previously announced strategies to counter election meddling. The company announced, for instance, that its long-promised advertisement transparency tool, which will allow people to see the ads that any given Facebook page has purchased, will be available globally this summer. In addition to that public portal, Facebook will require anyone seeking to place political ads in the United States to first provide a copy of their government-issued ID and a mailing address. Facebook will then mail the would-be advertiser a special access code at that address, and require the advertiser to disclose what candidate or organization they’re advertising on behalf of. Once the ads are live, they’ll include a “paid for by” label, similar to the disclosures on televised political ads.
While this process may prevent people from purchasing phony ads that are explicitly about an election, it doesn’t apply to issue-based ads. That leaves open a huge loophole for bad actors, including the Russian propagandists whose ads often focused on stoking tensions around issues like police brutality or immigration, rather than promoting candidates. This process is also currently exclusive to the United States.
“We recognize this is a place to start and will work with outside experts to make it better,” Rob Leathern, Facebook’s product management director, said on the call. “We also look forward to bringing unprecedented advertising transparency to other countries and other political races.”
The executives also detailed their approach to spotting fake accounts and false news before their influence spreads. One strategy involves partnering with third-party organizations that can vet suspicious news stories. Facebook has already announced a partnership with the Associated Press in the United States. When stories are flagged as potentially false, either by Facebook users or the company’s own technology, they’re sent to the fact-checkers. When a story is deemed false, Facebook lowers its likelihood of appearing in people’s News Feeds; Facebook product manager Tessa Lyons says a “false” rating reduces a story’s News Feed distribution by 80 percent.
Critically, this process applies to photos and videos, not just text. The company has also begun notifying people who have shared the stories that the contents are suspect. Those who continue to see the story in their feeds will also see related articles that fact check the piece. Facebook currently has these fact-checking partnerships in six countries, with plans to expand.
This is a long way from Facebook executives’ past claims that they should not be the “arbiters of truth,” a common refrain among tech giants. But as international regulators bear down on Facebook to acknowledge its past mistakes and prevent them in the future, the company is reluctantly taking more responsibility for monitoring the information on its platform—if only to ward off government intervention.
There’s some evidence it’s working. Facebook is now on the lookout for foreign meddling in elections around the world, in part by automatically checking the country of origin of the account behind a given Facebook page, and analyzing whether that page is spreading “inauthentic civic content.” Suspicious pages then get manually reviewed by Facebook’s security team. The strategy has already proven effective: during last year’s special election in Alabama, Facebook discovered that Macedonian hoaxers were setting up pages to disseminate fake news, a practice that country became known for during the 2016 election.
“We’ve since used this in many places around the world, such as in the Italian election, and we’ll deploy it moving forward for elections around the globe, including the US midterms,” said Samidh Chakrabarti, a Facebook product manager.
These approaches are promising, but far from comprehensive. They also don’t address the simultaneous scandal engulfing Facebook right now: The company has historically done little to prevent its users’ data from falling into the wrong hands. That valuable information can be used to target people in ways that Facebook has no control over.
Perhaps the most worrisome part of Facebook’s plan to defend democracy, though, is that it has yet to be battle tested. If it fails, we may not know until it’s too late.