If You’re Going to Delete All My Facebook Posts, Then You Might As Well Do It Right

Judicial benchslap stories are juicy legal fodder, and this one was no different. Recently, the legal community eagerly gossiped about a federal judge who lashed out at a well-known New York law firm. The offense? Judge Nicholas Garaufis of the Eastern District of New York was infuriated that the firm had sent a mere junior associate instead of a partner to a hearing on two cases that “implicate[] international terrorism and the murder of innocent people in Israel and other places.” While the judge has since apologized for his remarks and the salacious part of the story is largely over, the parties continue to litigate.[1]

The underlying claim in the pair of lawsuits is that Facebook facilitates terrorism by providing a platform for militant groups to incite attacks. This raises the question of what role social media networks, or online service providers in general, should play in policing users for potentially criminal or violence-inducing conduct. In these cases, Facebook stands accused of doing too little; in other instances, a company can do too much.

What kind of responsibilities, then, do social media companies have?

From a purely legal standpoint, the answer is probably none. Section 230 of the Communications Decency Act of 1996 says that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In other words, social media platforms are not liable for the content that their users publish. For the most part, this is a good thing. It allows innovation and promotes free speech. If websites featuring user-generated content were liable for their users’ posts, many would likely self-censor to protect themselves from potential lawsuits. In addition, the task of censoring everything would be nearly impossible given the amount of content uploaded online. For example, more than 100 hours of video were uploaded to YouTube every minute in 2013.

Of course, this does not mean that social media companies give users free rein. Promoting terrorism or committing crimes is (more likely than not) against a company’s terms of use, so offending posts are taken down and offending users are removed. This past year, Twitter suspended hundreds of thousands of accounts for posting terrorist content. This demonstrates that, while there may be no legal obligation to monitor users’ content, companies have nonetheless implemented policies and practices that mirror what they feel are their moral obligations.

However, even though companies have proactively undertaken some policing responsibility, the question of how closely they should work with law enforcement remains open. In other words, having accepted these moral duties, how do they discharge them well?

Programs like YouTube’s flagging system rely on employees or users themselves to monitor posted content. The government intervenes only when content is reported to it. Keeping the government one step removed helps safeguard the right to privacy. It may seem paradoxical to say that these practices protect user privacy when someone is still monitoring the content, but the Fourth Amendment only protects against unreasonable searches by the government, not searches by private entities to whom you have freely given your information or by people whom you allow to view it.
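To make that division of labor concrete, here is a minimal sketch of how such a flagging pipeline could work. The class names, the flag-reason check, and the referral hook are all hypothetical illustrations, not a description of YouTube’s actual system.

from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    author: str
    text: str
    flags: list = field(default_factory=list)  # reasons submitted by flaggers

class FlaggingQueue:
    """Hypothetical moderation queue: only flagged posts are ever reviewed."""

    def __init__(self):
        self.posts = {}          # post_id -> Post
        self.review_queue = []   # populated exclusively by flag()

    def publish(self, post):
        self.posts[post.post_id] = post

    def flag(self, post_id, reason):
        # A user or employee flags a post; this is the only path into review.
        post = self.posts[post_id]
        post.flags.append(reason)
        if post not in self.review_queue:
            self.review_queue.append(post)

    def review(self, refer_to_law_enforcement):
        # A human reviewer handles each flagged post: take it down and,
        # if warranted, pass only that post along to the authorities.
        while self.review_queue:
            post = self.review_queue.pop()
            if any("terror" in reason.lower() for reason in post.flags):
                del self.posts[post.post_id]
                refer_to_law_enforcement(post)

In this toy version, law enforcement sees nothing unless a flag is raised and a reviewer agrees, which is exactly the one-step-removed property described above.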

Still, it is problematic to let private companies become the ones who dictate what is and is not acceptable online behavior. Leaving it to companies to decide what gets reported leads to uneven governance across the Internet and chips away at the government’s ability to enforce its own laws (especially given the challenges of Going Dark). Companies may also overlook coded language that seems innocuous, allowing criminal activity to continue undetected.

Law enforcement and private companies clearly cannot act completely independently of one another; nor can they work too closely together. It is quite a difficult balance to strike.

Perhaps the best solution is one where neither monitors. Instead, something watches for them. In the future, companies could use artificial intelligence to run programs that review all users’ activity and decide whether there is reason for law enforcement to take a closer look at a particular user. The criteria these programs search for could be set jointly by law enforcement and social media companies, but it is important that the companies be the ones to control the programs in order to avoid Fourth Amendment concerns. Otherwise, potential litigants would have a strong argument that the government is conducting unlawful searches and seizures, since social media companies would effectively be acting as agents of the government when administering these programs. Interestingly enough, this also renders moot the current struggle to decide exactly how much policing companies should be doing on behalf of law enforcement. Social media platforms would still escape legal liability as they do now, but the objectivity of an A.I.-aided process would allow them to approach enforcement in a more consistent manner.
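As a rough illustration of what such a company-run program might look like, consider the sketch below. The criteria list, threshold, and scoring function are placeholders for whatever the platform and law enforcement might jointly agree on; the point is only that the company scores all activity internally and shares nothing but the referrals.

from typing import Callable, Dict, Iterable, List

JOINT_CRITERIA: List[str] = ["incitement", "attack planning", "recruitment"]  # assumed, jointly set
REFERRAL_THRESHOLD = 0.9  # assumed: only high-confidence matches ever leave the company

def warrants_referral(posts: Iterable[str],
                      score: Callable[[str, List[str]], float]) -> bool:
    # `score` stands in for a trained model that rates a post against the
    # joint criteria; the platform, not the government, runs it and sees the raw text.
    return any(score(text, JOINT_CRITERIA) >= REFERRAL_THRESHOLD for text in posts)

def run_screening(accounts: Dict[str, List[str]],
                  score: Callable[[str, List[str]], float],
                  notify_law_enforcement: Callable[[str], None]) -> None:
    # Company-operated loop: every account is screened, but only the lead
    # (a username, not the underlying account data) is handed to law enforcement.
    for user, posts in accounts.items():
        if warrants_referral(posts, score):
            notify_law_enforcement(user)

Because the company operates the screening end to end and law enforcement receives only the resulting leads, the arrangement preserves the separation that keeps the Fourth Amendment agency argument at bay.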

[1] Cohen v. Facebook, Inc., No. 16-CV-4453 (E.D.N.Y. Am. Compl. filed Oct. 10, 2016); Force v. Facebook, Inc., No. 16-CV-5158 (E.D.N.Y. Am. Compl. filed Oct. 10, 2016).
