When Russia tried to influence the 2016 US presidential election, it focused the attention of American officials and the public on foreign disinformation campaigns.
US government investigations determined that Russian hackers broke into the emails of the Democratic National Committee and of the campaign chairman of then-presidential candidate Hillary Clinton, and published internal documents. Russia's "troll factory" opened thousands of social media accounts to influence American political discourse.
The US intelligence community concluded that the Russian government had tried to undermine Clinton's candidacy and boost her opponent, Donald Trump.
Words like "disinformation" have become commonplace in American politics. Companies that own social networks have been asked to take action against lies on the Internet.
Eight years later, as Americans prepare to elect a new president in November, some experts believe America is in no better position to prevent the spread of disinformation than it was in 2016.
In their view, the problem has grown, politics has intruded into discussions about disinformation, and social networks are doing less and less to moderate content.
"The fear is that we could go back to a situation like 2016, where we could have real interference by trolls, foreign interference on these platforms. And a large number of voters get their information on these platforms," warned Shilpa Kanan, who was once in charge of content moderation at Twitter (now X).
However, given the number of pressing issues in America, not everyone agrees that misinformation is a real threat to the country.
Changed circumstances
At the start of this year's election cycle, Americans are deeply divided. According to polls, almost a third believe the false claim that the 2020 presidential election was rigged.
Former President Donald Trump - who was defeated in that election and is the favorite to win the Republican presidential nomination - is actively promoting that narrative.
On January 6, 2021, his supporters stormed the US Capitol in an attempt to stop the formal confirmation of the election results, an event that continues to divide the American public.
Conspiracy theories, lies and aggressive rhetoric have become commonplace in American politics, says Sara Aniano of the Anti-Defamation League.
"The politically acceptable options have changed since 2016. Since then, we've witnessed the normalization of some really heinous rhetoric and false narratives," points out Aniano, who researches disinformation.
Abroad, the US faces rivals who may want to influence the outcome of the election.
The White House is apparently aware of that danger. President Joe Biden raised the issue in a conversation with Chinese leader Xi Jinping in November and received assurances that Beijing would not interfere in the November election, CNN reported.
The landscape is also shifting in cyberspace and technology. The US Cybersecurity and Infrastructure Security Agency (CISA) has expressed concern that the rapid development of artificial intelligence (AI) could "amplify existing risks to election infrastructure".
Social networks, meanwhile, are scaling back efforts to moderate the information shared on their platforms, according to the non-profit media organization Free Press.
This has "created a toxic online environment that is vulnerable to abuse by anti-democratic forces, white supremacists and other bad actors," the organization said in a report in December.
Although this retreat from content moderation is sometimes ideological - the owner of X, Elon Musk, has publicly advocated fewer restrictions on speech - more often the reasons are economic.
Technology companies have laid off more than 200,000 employees since last year, including staff in charge of content moderation.
The role of politics
The threat posed by the spread of disinformation has increasingly become a political issue.
In 2022, the Biden administration formed the Disinformation Governance Board within the Department of Homeland Security, tasked with coordinating the department's efforts to counter misleading or false information on the Internet.
The formation of the obscure board caused great controversy. Leading Republicans on the House Homeland Security and Intelligence Committees wrote to the administration asking for more information about the new entity. Republican Senator Mitt Romney of Utah said that the board's formation "sends a message to the world that we are going to spread propaganda in our country."
Sen. Josh Hawley, Republican of Missouri, tweeted that "the Department of Homeland Security's top priority is to regulate Americans' free speech."
Nina Jankowicz, a disinformation expert who was to head the board, denied that its goal was to censor Americans. Amid the controversy, she faced harassment and even death threats.
After only a few weeks she resigned, and the board was soon disbanded.
The issue of disinformation has remained controversial ever since. In February 2023, the House Judiciary Committee requested documents from five leading technology companies as part of an investigation into whether they had conspired with the Biden administration to censor conservative views.
It's not just officials who are concerned about how the government and social media handle misinformation. Jankowicz believes that the politicization of efforts to counter misinformation makes it harder for tech companies to moderate fake and toxic content.
"They are left with no incentive to do the right thing," Jankowicz points out.
However, Aaron Terr, a director at the Foundation for Individual Rights and Expression (FIRE), a nonpartisan civil liberties organization, disagrees with that assessment.
He believes that if the government is given powers to fight disinformation, politicians could label ideas they disagree with as lies and try to remove them from public discourse, which would chill public debate, Terr told Voice of America.
"Whatever fears there are about disinformation, there is no reason to believe they justify the abolition of basic civil liberties," Terr points out.
What do social networks say?
The First Amendment to the US Constitution not only guarantees Americans the right to free speech, but also gives social networks the right to make their own editorial decisions.
The companies say they are doing just that, and reject accusations that they have given up on the fight against misinformation.
Voice of America sought comment from four major social media companies: Meta (Facebook, Instagram), YouTube, TikTok and X. With the exception of X, which generally does not respond to media requests for comment, all responded and outlined their strategies for combating false information.
"We put a lot of effort into measures and systems that connect people with high-quality election content. Content that misleads voters about how to vote or encourages interference with the democratic process is prohibited. We strictly enforce our guidelines, which apply to all types of content," YouTube spokeswoman Ivy Choi said in response.
"Our resolve to support the 2024 elections is firm, and our election teams are on alert," Choi said.
Meta referred VOA to a document outlining efforts to combat election disinformation.
"We have around 40,000 people working on safety and security, and have invested more than $20 billion in teams and technology in this area since 2016. Much of our approach has been consistent for some time, but we continue to adapt to meet new challenges, including artificial intelligence," the document states.
The company also pointed to a policy on the transparency of political ads. Meta will block new political ads during the final week of the presidential campaign. It also requires advertisers to disclose when they use artificial intelligence to alter photos, video or audio in a political ad.
While X did not respond to VOA's request for comment, the company previously said it was changing its election policy to prevent the spread of content "that could intimidate or mislead people into surrendering their right to participate in the civic process."
However, X says it will not necessarily remove content that violates the company's rules, but will instead limit its visibility and flag it.
Of all the platforms, TikTok is the biggest unknown. The app became available in the US only after the 2016 election, and its popularity has since grown rapidly, especially among young people. The US government is considering banning the Chinese-owned app over possible threats to national security.
Unlike the other platforms, TikTok prohibits political ads. A spokesperson told Voice of America that the company does not allow misinformation about civic and electoral processes and removes it from the platform. TikTok also employs people to check content for accuracy and directs voters to verified election information. It also bans AI-generated posts about politicians and requires AI-generated content to be labeled.
However, Kanan - the former Twitter employee - fears that this is not enough, especially in the crucial year of 2024, when billions of people around the world will vote.