EFF: In the Debate Over Online Speech and Security, Let’s Get to the Science

A debate is raging in Congress and the media over whether we need new regulations to shape how Internet platforms operate. Too often, however, the discussion is based on rhetoric and anecdote rather than empirical research. The recently introduced National Commission on Online Platforms and Homeland Security Act is intended to change that, and we’re pleased to support its goals.

Comprehensive Research Can Help Ensure Policy Choices Are Based on Factual Evidence

When we are faced with a hard societal problem, we are often tempted to look for “obvious” answers. For example, politicians and the media used to regularly blame violent video games for real-world violence, even though scientific research has shown there is no causal link. Evidence notwithstanding, the belief that violent video games cause violence has led to a series of proposals to censor lawful speech, from unconstitutional state laws banning violent video games (including making it criminal to sell them) to efforts in Congress to mandate that violent video games be labeled, again without evidence, as sources of violent behavior.

Right or wrong, today’s politicians and commentators often point to Internet platforms as a cause of various societal problems. Given how crucial those platforms are to our ability to communicate, organize, and access information, we must do better today than we did with the video game debates of yesterday. While many have concluded that platforms are to blame for some of the worst things that happen in society, that conclusion is not backed by any comprehensive, objective science. This legislation seeks to resolve that problem by creating a national commission to hold hearings with experts, fund critical research, seek information on whether platforms’ algorithms and automated decision making play a role in these problems, and ultimately issue a report for the public and policymakers alike.

Automated Decision Making for Content Moderation Has a Poor Track Record in Protecting Human Rights, But Much of it Remains Shrouded in Secrecy

From our own research on how platforms use automated tools to handle “extremist content,” we have found that material documented by human rights defenders has been deleted. For example, YouTube’s automated filters have taken down thousands of Syrian channels that depicted human rights violations. Our joint investigation with Syrian Archive and Witness estimates that at least 206,077 videos, including 381 videos documenting airstrikes targeting hospitals and medical facilities, were removed from the platform between 2011 and 2019.

While critics push the platforms to remove more content more quickly (which they can only do by increasing their reliance on automated tools), they neglect to consider the impact that might have on lawful speech, civil liberties, and civil rights. From what we know right now, we are worried that greater reliance on automation will have dire consequences for freedom of expression, and we hope the Commission’s study can analyze this problem.

Finally, we are heartened to see the inclusion of the Santa Clara Principles in the legislation as a means to assess how platforms moderate content: whether their enforcement of “terms of services or codes of conduct” is “transparent, consistent, and equitable,” and how they provide users an opportunity for redress.

As the legislation moves through the process, we believe some work remains, but the underlying goals of the bill are, even at this early stage, worthy of support. The issue of violence in society is a serious one that warrants serious inquiry and in-depth research, not unfounded assumptions. We have encouraged Congress to move this bill forward and to ensure that third parties can audit the government’s findings, particularly in the area of automated tools. So far, research into these tools has been stymied by a veil of trade secrets and assertions of proprietary information.

