Plug-ins, Extensions and Other Solutions Suggested to Combat Fake News on Facebook
SAN FRANCISCO—From a co-working space in the Brooklyn neighbourhood of Dumbo, Eli Pariser was reading a post-election article on the “insolvable” problem of fake news when the idea hit him.
Why not create a rolling document, collecting ideas from the best and brightest minds in corporate America, academia and journalism to curtail the scourge of the misinformation ecosystem? Pariser, CEO of Upworthy, a viral news site that leans toward liberal social causes, and author of The Filter Bubble, grabbed his laptop and started a Google doc called “Design Solutions for Fake News,” which has since swelled to 135 pages.
“It’s similar to the spam problem in the 1990s. We fixed that and we can fix this,” says Sunil Paul, an Internet entrepreneur who founded companies such as Brightmail and Freeloader. Paul created a group, The Truth Project, to eradicate the problem. His solution? A mix of technology, social incentives and laws.
Pariser and Paul are part of a growing army of individuals, citizen groups and app developers coming up with solutions to erase fake news on Facebook and Google, devising Chrome extensions, creating apps and debating bitcoin technology.
Many have entered the breach, they claim, because larger companies have been reluctant to act out of fear of angering conservative groups.
Fake news has fooled left-leaning consumers too, and it ranges from outright mimics of legitimate outlets (Abcnews.com.co) to highly misleading stories to satire gone wrong.
Fake news is “not new by any stretch,” says Claire Wardle, research director of First Draft News, a coalition of nine non-profits that focus on user-generated content. But it has gained traction as never before, she says, because individual tech entrepreneurs in far-flung locales like Macedonia, as well as the U.S. suburbs, saw opportunity to make money while tapping into anger during a contentious election.
An imperfect storm of factors has hyper-accelerated the rampant growth of fake news, says John Johnson, co-author of Everydata: The Misinformation Hidden in the Little Data You Consume Every Day.
Trust in the media is at an all-time low — 32 per cent, according to a Gallup poll — amid an avalanche of news reaching a disenfranchised electorate, many of whom are seeking information, however specious, that affirms their preconceived notions, Johnson says.
The stop-fake-news efforts start with Facebook, criticized for being a primary conduit of patently false stories.
On Thursday, Facebook announced it was adding options for readers and third-party fact checkers to flag articles.
There are 96 fact-checking projects in 37 countries, according to Duke Reporter’s Lab.
A spectrum of solutions, all in the early stages, could make a dent though they’ll require coordination, time and money.
“There are uncomplicated ways to reduce the visibility of fake news,” says Pariser, who favours the use of metadata to identify accurate information from trusted sources. He’s also an advocate of pairing stories with different viewpoints on the same topic to give readers a deeper perspective.
Since he launched his Google doc in late November, several concepts have floated to the top. Some have suggested a Better Business Bureau-like ratings agency funded by content producers, others a Wikipedia-style database of information about news organizations.
One intriguing fact-checking suggestion involves blockchain technology, which might leverage an identity service such as Thomson Reuters’ BlockOneID to ensure both the anonymity and the reliability of fact checkers, who would be paid in cryptocurrency.
Paul’s idea to fight fake news, which could take the form of a non-profit project, is likely to include former colleagues from Brightmail, a company that developed anti-spam filters. “This is a big deal, and I have the knowledge and wherewithal to do it,” says Paul, who outlined his project in a Medium post.
The three-pronged approach includes developing technology to tag content with a truthfulness score; social incentives that reward legitimate aggregators and news publications through something called a carrotmob, described as the inverse of a boycott; and a push for laws that extend libel liability to the malicious refusal to retract false stories.
As with spam, Paul says, fixing the “Internet media problem will take a similar full assault of new technology, social incentives and laws.”
Mike Caulfield, a social media adviser to faculty at Washington State University, has started a project, the Digital Polarization Initiative, with the American Association of State Colleges and Universities to help college students identify fake news and understand how their Facebook news feed is skewing their view.
In the interim, a handful of consumer products have surfaced.
Slate unfurled This Is Fake, which combines crowdsourcing and editorial curation to identify articles in Facebook feeds that spread misinformation and flag them as false.
Browser plug-in B.S. Detector — available for Chrome, Firefox, Opera and Safari — cross-references news links on Facebook with a database of sites flagged as fake. A red light warns the reader, though the extension does not block misleading sites.
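A minimal sketch of that lookup logic, in Python and with an invented two-entry domain list standing in for the extension’s much larger curated database, might look like this:

```python
from urllib.parse import urlparse

# Hypothetical list of domains previously flagged as fake or misleading;
# the real extension ships a far larger, curated database.
FLAGGED_DOMAINS = {"abcnews.com.co", "example-fake-news.com"}

def check_link(url: str) -> bool:
    """Return True if the link's domain matches a flagged site."""
    host = urlparse(url).netloc.lower()
    # Strip a leading "www." so www.abcnews.com.co still matches.
    if host.startswith("www."):
        host = host[4:]
    return host in FLAGGED_DOMAINS

for link in ["http://www.abcnews.com.co/story", "https://apnews.com/article"]:
    verdict = "flagged" if check_link(link) else "not in database"
    print(link, "->", verdict)
```

The important design point is that the check is a simple domain lookup, not an analysis of the article itself, which is why the plug-in can only warn rather than judge individual stories.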
An open source project, the FiB Chrome Extension, combs through a user’s Facebook news feed to verify status updates, images and links through image recognition, keyword extraction, source verification and a Twitter search. An artificial intelligence assessment of facts results in a verdict tagged as “Verified” or “Not Verified.” If the story is deemed false, the AI will search for a verified source on the same topic.
The extension also can check the veracity of information about to be posted, using the same verification process. A chatbot would alert the consumer.
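The shape of that pipeline can be sketched in a few lines of Python. The trusted-source list, the crude longest-words keyword extraction and the function names below are all stand-ins for illustration; the real extension also runs image recognition and Twitter searches before issuing its verdict.

```python
from urllib.parse import urlparse

# Hypothetical allow-list standing in for the source-verification step.
TRUSTED_SOURCES = {"apnews.com", "reuters.com", "bbc.com"}

def extract_keywords(text, limit=5):
    """Crude keyword extraction: the longest distinct words in the post."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return sorted(words, key=len, reverse=True)[:limit]

def verify_post(text, link=None):
    """Return a 'Verified' or 'Not Verified' verdict for a status update."""
    keywords = extract_keywords(text)  # would seed external fact searches
    if link:
        host = urlparse(link).netloc.lower()
        if host.startswith("www."):
            host = host[4:]
        if host in TRUSTED_SOURCES:
            return "Verified"
    # A fuller pipeline would now search for a verified source covering the
    # same keywords before settling on a verdict.
    return "Not Verified"

print(verify_post("Senate passes budget bill", "https://apnews.com/article/123"))
print(verify_post("Aliens endorse candidate, sources say"))
```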
Another movement attempts to galvanize machine-learning specialists into training models that pick up patterns in fake news headlines.
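As a rough illustration of what such a headline classifier could look like, the sketch below trains a small text model with scikit-learn; the headlines and their “fake”/“real” labels are invented here, whereas a real effort would rely on a large labelled corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data, invented for illustration only.
headlines = [
    "You won't believe what this politician secretly did",
    "Shocking proof the election was rigged, media silent",
    "Senate committee approves budget resolution",
    "City council votes to expand transit funding",
]
labels = ["fake", "fake", "real", "real"]

# Word patterns in the headlines become TF-IDF features, and a logistic
# regression learns which patterns predict each label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

print(model.predict(["You won't believe this shocking secret"]))
```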
Despite the recent swarm of possible solutions, ridding the Internet of fake news will take time, Paul and others warn.
“It took so many years of work to get spam to a manageable state,” Paul says.
For now, the biggest challenge is stitching together so many ideas into a unified effort that avoids duplicated work or, worse, conflicting technologies.
“Coordination is important so we don’t walk on each other’s toes,” First Draft News’ Wardle says.
And then there is the political calculus: Some believe Facebook may be overly cautious in disposing of fake news out of fear it will spark a conservative backlash.
“Facebook can do it, but it might be more concerned about reaction from the right, which pushed fake news,” says Travis Katz, CEO of travel website Trip.com. “For Facebook, it is a political issue.”
Facebook declined to comment.
Silicon Valley experts argue the grassroots effort underscores the importance of empowering the public to be the ultimate arbiter of what constitutes real vs. fake news.
“It is appropriate it comes from a variety of sources rather than Facebook, Google and Twitter,” says Sally Lehrman, director of The Trust Project at Santa Clara University’s Markkula Center for Applied Ethics.