Monday Memo: Will Facebook's fake news tool work? Experts say 'nope'

Social media platforms have made strides in the first three months of the year to develop systems that take aim at fake news and misleading information. They have had plenty of encouragement. Germany's justice minister has threatened social networks with fines for failing to delete or screen what the ministry calls hate content. Given Deutschland's economic prominence (it is the largest consumer market in the European Union) and its history with censorship regimes, Facebook has plenty of incentive to heed such a warning.

Facebook is implementing a two-step process in an attempt to appease German politicos. First, Facebook users will be allowed to flag stories in their news feeds as questionable, and the articles will in turn be labeled "disputed." Second, the social network's fact-checking henchmen (or high priests, per The Wall Street Journal) will comb through flagged articles and determine their validity. At the outset, Facebook will use staff at Snopes and The Associated Press to carry out the second phase of the project.

There are some inherent problems with this approach, and digital media experts have been quick to point out many of them. 

Syndicated columnist Leonid Bershidsky argues that Facebook cannot target content producers and simultaneously claim status as an impartial platform. "If you do censorship in any form, rather than tell authorities they can go after the specific users if they want, you are no longer just a technological platform -- you are a media organization."

Jay McGregor, a tech contributor at Forbes, notes that users will be able to overload the system by flagging controversial but genuine news articles as fake -- reports that members of the Trump administration are under federal investigation, for instance. There's also the question of whether followers of cultish fake news purveyors will care that a higher power has deemed their stories questionable.

The Wall Street Journal editorial board observes that selective flagging and censorship pull Facebook into the ever-tumultuous political arena. Notably, the German political party applying pressure to the platform is facing the prospect of election losses to smaller, upstart political movements that rely on social media to spread their message. "Facebook can run its business as it pleases, but this fake fact-checking exercise is likely to damage its brand and open itself to political pressure from every corner, including from Mr. Trump."

I can't pretend to be impartial in this discussion, since I am currently investing in the development of an artificial intelligence-driven platform that, among other things, might help readers make better judgments about questionable news content. But I do believe it will take more than a true-false encyclopedia to make sense of the murky web of information.

For one thing, human fact-checking takes a significant amount of time. When I worked in digital newsrooms, minutes were treated like hours. Miss a scoop on breaking news by 30 minutes and you might lose half of your potential audience. The speed and ferocity with which information moves across the web make it difficult for human fact-checkers to keep up in any meaningful way (we already know that editorial corrections and follow-ups are read far less frequently than the erroneous stories they seek to correct).

Another challenge is the tenuous nature of breaking information generally. Just because an article checks out as factually correct doesn't mean it gives readers an accurate impression of what's happening. Consider a case I wrote about a few months ago. Three years ago, a Time magazine story condensed a quote by Republican House Speaker Paul Ryan, who at the time was speaking about welfare at the Conservative Political Action Conference. The truncated quote was weaponized by partisan news outlets in not one but two rounds of social media attacks. Time issued a correction and even changed the headline, but that didn't stop the distorted story from reappearing in viral news feeds nearly two years later.

I do not believe the internet, in the hands of a modern Western democracy, will ever be conducive to censorship. The web is, by nature, an open platform, with unlimited nodes and unlimited opportunities for creation and consumption. Purveyors of misleading content always have found an audience and always will, using the best available tools of their time to do so.

Our solution is qualitatively different in this respect from what Facebook, Twitter and others have attempted so far. We're not trying to make choices for readers. We are instead giving them more information, a more powerful tool through which they can access online content.

Just as Google and the early search engines before it opened up the internet to a whole new variety of uses, our comprehensive search engine will help unlock even more of the World Wide Web's potential power.

Computers already know how to read. In 2011, IBM's Watson computer beat two of Jeopardy's biggest stars -- not once, but twice! If computers can read and understand text, then we can teach them to use a search engine. And if we can teach a computer to use a search engine, we can build a software program that runs searches on every possible aspect of any piece of information we have in front of us, instantly.
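To make that idea concrete, here is a minimal sketch in Python. It is illustrative only, not a description of our production system: the function names are hypothetical, the phrase extraction is deliberately naive, and a real version would call an actual search API rather than return canned results.

```python
# A minimal sketch of automated cross-referencing: pull search "seeds"
# out of an article, then run a query for each one. Everything here is
# hypothetical and simplified for illustration.

import re
from collections import Counter

STOPWORDS = {"the", "and", "that", "with", "from", "this", "have", "will"}

def extract_key_phrases(article_text: str, top_n: int = 5) -> list[str]:
    """Use simple word frequency to pick terms worth searching on."""
    words = re.findall(r"[a-z']+", article_text.lower())
    counts = Counter(w for w in words if len(w) > 3 and w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

def run_search(query: str) -> list[str]:
    """Hypothetical stand-in; a real version would call a search API."""
    return [f"result for '{query}'"]  # canned output so the sketch runs

def cross_reference(article_text: str) -> dict[str, list[str]]:
    """Search every key phrase and collect what else the web says."""
    return {p: run_search(p) for p in extract_key_phrases(article_text)}

if __name__ == "__main__":
    sample = "Speaker Paul Ryan spoke about welfare reform at the conference."
    for phrase, hits in cross_reference(sample).items():
        print(phrase, "->", hits)
```

The design point is the loop, not the parts: once a machine can turn a story into queries, checking "every possible aspect" of it is just a matter of running that loop at scale.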

In doing so, we seek not to limit readers based on a subjective judgment made by someone else, but to offer them a view into otherwise invisible parts of the web. Is there better information out there? Is there information missing from a story that is available somewhere else, which readers just aren't seeing?

Automating this process for news readers also sets the stage for more sophisticated forms of research automation in fields like law, science and medicine. Imagine reading an academic paper, clicking a button, and in seconds having a list of every relevant study, prioritized by the context and type of research being performed. Imagine a program that automatically pulls excerpts from citations and uses them to create an outline for further research and follow-up studies. A sketch of that second idea follows.
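Again, this is purely illustrative and every name in it is hypothetical: fetch_excerpt() stands in for a real scholarly-metadata service (Crossref, for example), and the "outline" is just cited works ordered chronologically with an excerpt apiece.

```python
# Illustrative sketch: given a paper's citation list, attach a short
# excerpt to each cited work and assemble a simple research outline.
# fetch_excerpt() is a hypothetical stub, not a real API.

from dataclasses import dataclass

@dataclass
class Citation:
    title: str
    year: int

def fetch_excerpt(citation: Citation) -> str:
    """Stub; a real version would query a scholarly-metadata API."""
    return f"Key finding of '{citation.title}' ({citation.year})."

def build_outline(citations: list[Citation]) -> str:
    """Order cited works chronologically, pairing each with its excerpt."""
    lines = ["Research outline:"]
    for c in sorted(citations, key=lambda c: c.year):
        lines.append(f"- {c.title} ({c.year}): {fetch_excerpt(c)}")
    return "\n".join(lines)

if __name__ == "__main__":
    refs = [Citation("Welfare Reform Outcomes", 2014),
            Citation("Media Framing Effects", 2012)]
    print(build_outline(refs))
```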

IDC estimates that the average U.S. office employee spends 30 percent of their workweek searching for information on the internet -- about 12 hours of a 40-hour week. A better, more comprehensive search engine could cut that time in half, returning roughly six hours per employee every week.

Purveyors of misleading content have always found an audience, no matter the medium. The way forward is technology that gives users access to more information, not less. Efforts to restrict access to content will only push users toward more open, accessible sources.

John Harper is Founder and CEO of Grapple Media Software. If you like our idea, check out our website and share our concept with those you think might be interested. Watch our demonstration on YouTube, and watch out for fake news.