Internal documents obtained by the Guardian show that Facebook's fact-checking system uses both artificial intelligence (AI) and human fact-checkers to flag articles that may warrant fact checks. In some cases, articles from popular websites like the New York Post may be manually referred to fact-checkers "with or without temporary demotion."
Facebook manually added the New York Post article to its fact-checking queue and reduced the article's distribution for a short period while reviewing its contents, the Guardian reported, citing the documents.
"We can do this on escalation and based on whether the content is eligible for fact-checking, related to an issue of importance, and has an external signal of falsity," the documents read, according to the outlet.
A Facebook spokesperson told Fox News that Facebook has "been on heightened alert because of FBI intelligence about the potential for hack-and-leak operations meant to spread misinformation."
"Based on that risk, and in line with our existing policies and procedures, we made the decision to temporarily limit the content’s distribution while our fact-checkers had a chance to review it," the spokesperson said.
When fact-checkers did not rate the story, Facebook "lifted the demotion."
The Post on Oct. 14 published emails recovered from a laptop purportedly belonging to 2020 Democratic nominee Joe Biden's son, Hunter. Trump attorney Rudy Giuliani shared the contents of the laptop with the outlet.
Facebook Policy Communications Director Andy Stone tweeted on Oct. 14 that the Post article was "eligible to be fact-checked by Facebook's third-party fact-checking partners" and that Facebook was "reducing its distribution" in the meantime.
Documents also show that Facebook keeps a list, called the "Alexa 5K," of "content in the top 5,000 most popular internet sites." Sites on that list are less subject to censorship or distribution limits "under the assumption these are unlikely to be spreading misinformation," according to The Guardian.
Additionally, Facebook has a set of emergency policies for the U.S. election called "break-glass measures." These direct Facebook to block certain viral, misleading claims without first consulting third-party fact-checkers, similar to its misinformation policies for claims about COVID-19.
"Generally, we take action on misinformation rated by fact-checking partners by reducing its spread and surfacing more information to people," Facebook's fact-check policies state. "However, we will remove misinformation if it violates our Community Standards, including misinformation and unverified rumors that could contribute to the risk of imminent violence or physical harm, voter and census interference content, and certain manipulated videos..."
Facebook has implemented a number of policies since the 2016 election to help combat foreign interference and misinformation. The company also implemented new measures in October ahead of the 2020 election to "stop abuse and election interference on our platform."
New measures include a temporary block on political ads, a voting information center, automatic labels on posts about voting and the U.S. election, additional security for official candidate accounts and additional page transparency.
Facebook removed 50 networks of coordinated inauthentic behavior worldwide this past year and helped more than 4 million Americans register to vote.
The tech giant faces competing pressure from Congress and other politicians: to censor misinformation and hate speech that could lead to violence, while at the same time reducing censorship of American news media and politicians, which Republicans argue unfairly targets conservative views and users.