The term "fake news" has become an epithet used by US President Donald Trump to describe any unpleasant article. But it is also an analytical term that describes deliberate misinformation presented in the form of an ordinary news report.
The problem is not entirely new. In 1925, Harper's Magazine published an article on the dangers of "fake news". Today, however, two-thirds of American adults get at least some of their news from social media, which rest on a business model that is vulnerable to outside manipulation and whose algorithms can easily be gamed for profit or for some nefarious goal.
Whether amateur, criminal or governmental, many organizations - both domestic and foreign - are skilled at reverse-engineering how tech platforms parse information. To give Russia its due, its government was one of the first to figure out how to weaponize social media and to use America's own companies against it.
Overwhelmed by the vast amount of information available online, people find it difficult to know where to focus their attention. Attention, not information, is the scarce resource. Big data and artificial intelligence make it possible to micro-target communication, so that the information people receive is limited to a "filter bubble" of the like-minded.
The "free" services that are offered on social networks are based on a profit model in which the information and attention of users are actually products that are sold to advertisers. Algorithms are designed to find out what keeps the user's attention so that it would be possible to offer him more advertisements and thus have more profit.
Emotions such as outrage stimulate engagement, and fake news built on false scandals attracts more users than credible information does. According to one study, a falsehood on Twitter was 70% more likely to be retweeted than information that was true.
Likewise, a study of demonstrations in Germany earlier this year found that YouTube's algorithm systematically steered users toward extremist content, because that was where the clicks - and, consequently, the profits - were greatest. Fact-checking by mainstream media often cannot keep up, and can sometimes even be counterproductive, because it draws more attention to the lie.
By its nature, the profit model of social networks can be weaponized by state and non-state actors alike. Facebook has recently come under heavy criticism over an unprecedented misuse of its users' private information.
Facebook CEO Mark Zuckerberg has admitted that in 2016 the company "wasn't ready for the coordinated information operations that we regularly encounter." The company, however, "has learned a lot since then and has introduced sophisticated systems that combine technology and people to prevent election interference on our service."
These efforts include automated programs to find and remove fake accounts; verifying the nationality of those who place political advertisements; hiring 10,000 additional people to work on security; and improving coordination with law enforcement and other companies to root out suspicious activity. But the problem is not solved.
An arms race will continue between social media companies and the state and non-state actors that invest in ways to exploit their systems. Technological solutions, such as artificial intelligence, are not a magic wand. Because it is often sensational and outrageous, fake news travels farther and faster than true information.
False information on Twitter reaches far more people, far more quickly, than credible information, and repeating a falsehood, even in the context of fact-checking, can increase the likelihood that a reader will accept it as true.
In the run-up to the 2016 US presidential election, the Internet Research Agency in St. Petersburg, Russia, spent more than a year setting up dozens of social media accounts disguised as local American news outlets. Sometimes the reports favored a particular candidate, but often they were designed simply to create an impression of chaos and disgust with democracy, and thus to suppress voter turnout.
When Congress passed the Communications Decency Act in 1996, the then-nascent social media companies were treated as neutral telecommunications providers that merely enabled customers to interact with one another. Under political pressure, the major companies have since begun to police their networks more carefully, removing obvious fakes, including those propagated by botnets.
But imposing restrictions on free speech, which is protected by the First Amendment to the US Constitution, raises difficult practical problems. The First Amendment does not protect machines or foreign actors (and private companies are not bound by it in any case), but it does protect odious domestic groups and individuals, who can serve as intermediaries for foreign influence.
In any case, the damage done by foreign players may be less than the damage we do to ourselves. The problem of fake news and the impersonation of genuine news sources is hard to resolve because it involves trade-offs among values we hold dear. Social media companies, wary of being attacked for censorship, want to avoid regulation by lawmakers who criticize them for both what they do and what they fail to do.
The experience of the European elections suggests that investigative journalism and warning the public in advance can help inoculate voters against disinformation campaigns. But the battle against fake news is likely to remain a cat-and-mouse game between those who spread it and the companies whose platforms they exploit. It will become part of the background noise of elections all over the world. Constant vigilance will be the price of protecting our democracies.
The author is a professor at Harvard University. Copyright: Project Syndicate, 2018.