Rising social media censorship hampers war-crimes investigations

An accelerating trend among social media platforms to take down online content they deem too violent or graphic is hampering war-crimes investigations and other important probes, says a top human-rights organization.

Human Rights Watch says the material, often removed at the behest of governments, is not being archived in ways that are accessible to investigators and researchers looking to hold criminals to account.

“Social media content, particularly photographs and videos, posted by perpetrators, victims, and witnesses to abuses, as well as others, has become increasingly central to some prosecutions of war crimes and other international crimes,” the group said in a report entitled “Video Unavailable”: Social Media Platforms Remove Evidence of War Crimes.

It says such content also helps media and civil society document atrocities and other abuses, such as chemical weapons attacks in Syria, a security force crackdown in Sudan and police abuse in the United States.

“Yet social media companies have ramped up efforts to permanently remove posts from their platforms that they consider violate their rules, or community guidelines or standards according to their terms of service, including content they consider to be ‘terrorist and violent extremist content,’ hate speech, organized hate, hateful conduct, and violent threats.”

The problem is exacerbated by social media companies’ increasing use of algorithms that identify and remove content so quickly that no user sees it before it is taken down.

Some platforms have filters to prevent material identified as terrorist and violent extremist content and related material from being uploaded in the first place, says the report, which is based on open-source material and multiple interviews with lawyers, legislators, police, investigators and child-protection workers.

Between January and March 2020, Facebook took down 6.3 million pieces of “terrorist propaganda,” 25.5 million pieces of “graphic violence,” 9.6 million pieces of “hate speech,” 4.7 million pieces of “organized hate” and disabled 1.7 billion “fake accounts.” The overwhelming majority was automatically flagged before users reported it, said the report.

Users appealed takedowns for 180,100 pieces of “terrorist propaganda,” 479,700 pieces of “graphic violence,” 1.3 million pieces of “hate speech” and 232,900 pieces of “organized hate.”

On appeal, Facebook restored access to 22,900 pieces of “terrorist propaganda,” 119,500 pieces of “graphic violence,” 63,600 pieces of “hate speech” and 57,300 pieces of “organized hate.” It restored content that had been taken down without any appeal in 199,200 cases involving “terrorist propaganda,” 1,500 cases involving “graphic violence,” 1,100 cases involving “hate speech” and 11,300 cases involving “organized hate.”

Between January and June 2019, 5.2 million Twitter accounts were reported for “hateful conduct” and more than two million Twitter accounts were reported for “violent threats.” Twitter subsequently “actioned” 584,429 accounts for “hateful conduct” and 56,219 accounts for “violent threats.”

Governments have encouraged the trend toward automated policing of social media posts, calling on companies to take down content as quickly as possible, particularly since a gunman livestreamed his March 2019 attack on two mosques in Christchurch, New Zealand, in which he killed 51 people and injured 49 others.

“Companies are right to promptly remove content that could incite violence, otherwise harm individuals, or jeopardize national security or public order,” says the report. “But the social media companies have failed to set up mechanisms to ensure that the content they take down is preserved, archived, and made available to international criminal investigators.”

In most places, it notes, local and national law enforcement officials can use warrants, subpoenas or court orders to compel companies to hand over content. But international investigators’ abilities to do so are limited.

They’re also more likely to miss important information and evidence because increasingly sophisticated artificial-intelligence systems take down material before they have a chance to see it or even know that it exists. “There is no way of knowing how much potential evidence of serious crimes is disappearing without anyone’s knowledge.”

The group says journalists and independent civil society organizations have played vital roles in documenting atrocities and violations in Iraq, Myanmar, Syria, Yemen, Sudan, the United States, and elsewhere. Some of this documentation has triggered judicial proceedings, even though these groups likewise lack access to removed content.

Companies have reconsidered some takedowns and reposted material after access requests; in other cases, they have declined to do so, contending it would be illegal for them to do otherwise.

The report says it’s unclear how long social media companies preserve content they’ve removed from their platforms before deleting it from their servers—or even whether the content is, in fact, ever deleted from their servers.

Facebook states that, upon receipt of a valid request, it will preserve content for 90 days following its removal, “pending our receipt of [a] formal legal process.”

The report said, however, that Human Rights Watch is aware of instances in which Facebook has retained banned content for much longer periods.

The platform told Human Rights Watch in an e-mail that legislation limits the time it can retain removed content.

“This time limit varies depending on the abuse type…. Retention of this data for any additional period can be requested via a law enforcement preservation request,” it said.

Twitter told the organization it retains “different types of information for different lengths of time, and in accordance with our Terms of Service and Privacy Policy.”

Human Rights Watch said it is aware of an instance in which YouTube restored content two years after it had taken it down.

“Holding individuals accountable for serious crimes may help deter future violations and promote respect for the rule of law,” said the report, released in September 2020.

“Criminal justice also assists in restoring dignity to victims by acknowledging their suffering and helping to create a historical record that protects against revisionism by those who will seek to deny that atrocities occurred.”

It said victims of serious crimes often face uphill battles in their quest for accountability, especially during ongoing conflict. Criminal investigations sometimes begin years after the alleged abuses were committed.

“It is likely that by the time these investigations occur, social media content with evidentiary value will have been taken down long before, making the proper preservation of this content, in line with standards that would be accepted in court, all the more important.”

Countries are bound by international law to prosecute genocide, crimes against humanity and war crimes, noted the report.

“Human Rights Watch urges all stakeholders, including social media platforms, to engage in a consultation to develop a mechanism to preserve potential evidence of serious crimes and ensure it is available to support national and international investigations, as well as documentation efforts by civil society organizations, journalists, and academics,” it said.

“In parallel with these efforts, social media platforms should be more transparent about their existing takedown mechanisms, including through the increased use of algorithms, and work to ensure that they are not overly broad or biased and provide meaningful opportunities to appeal content takedowns.”

Read more at https://www.hrw.org/report/2020/09/10/video-unavailable/social-media-platforms-remove-evidence-war-crimes.

