
Episode 10: Pornhub and Section 230

In this episode, I share some thoughts about a recent lawsuit involving Pornhub. I also mention Section 230 protections, since there have been recent calls to roll back this provision of the Communications Decency Act.

(Transcript)

Well I’m back from a little break. Wow, a lot of stuff happened while I was busy with my final exams. We had an attempted coup, a violent attack on the Capitol Building—the word “unprecedented” hardly seems to do it justice. But enough about politics.

(As if everything else I’m about to say isn’t about politics too. I think it’s funny how some people try to keep politics out of daily life, because politics is divisive, or not as important as a polite family dinner, or whatever. But at some point what happens in the public sphere matters. At some point it becomes more than just a theoretical argument about trickle-down economics and social safety nets. At some point people could get hurt.)

Nicholas Kristof

There was a recent lawsuit against Pornhub. Several victims of sexual abuse or assault (or just victims of having private videos made public) sued the company for promoting harmful content related to their victimization.

It is a troubling circumstance when a company is in the business of transmitting and monetizing videos that depict and glorify violence or illegal acts. And that’s exactly what this lawsuit was about. The suit sought to stop this company from peddling harmful or illegal content.

Nicholas Kristof wrote an opinion piece for the New York Times. Kristof is a smart guy with a lot of great ideas. A lot of the time I like what he has to say. But this time I have to disagree with him.

Let me give you a brief summary. His article talks about some of the victims of abuse who were involved in this Pornhub problem. They are gut-wrenching stories, with varying levels of harm. One victim is dead now, he says, presumably because of the abuse they experienced in relation to videos that were posted on Pornhub. Other individuals were true victims of child sex trafficking, that is, they were kidnapped and raped for financial gain. Kristof summarizes the problem by saying that Pornhub “is infested with rape videos.” Obviously Kristof wants to do something about this problem. I mean, who doesn’t? So he proposed a solution: Pornhub should require documentation of models’ age and consent so that it can verify everything is legal and not abusive.

This seems pretty straightforward. And Kristof seems to be offering a compromise (i.e., not shutting down Pornhub entirely). Verifying models’ age and consent seems like the very least the company could do to make sure that its practices are not promoting abuse.

Now, I think it was part of the settlement, but Pornhub actually did pretty much what Kristof was suggesting. To be specific, they already had a way for videos to be verified, so they just started enforcing it. The staggering thing is how far the enforcement went. They say they removed 13 million videos because they were not verified. That left 4 million verified videos on the site.

Now, wait, hold up, stop the presses. Those 13 million videos were not abusive. They were not illegal. What they were was not verified. From what people are saying, there were actually a total of 118 videos containing “child sexually abusive material,” that is, videos with illegal content.

The execs at Pornhub agree that this is 118 instances too many, and it’s completely appropriate to do something about it. But this is where I have to stop and wonder: if we’re talking about silencing 13 million presumably legitimate voices to stop 118 instances of bad behavior, what are the consequences?

Chilling Effects on Speech

Think about these 13 million videos, the ones that were not abusive or illegal or involving children or whatever. The ones that were fine, except that they were not verified. Why were they not verified?

The obvious answer is that the people who uploaded them were up to no good. That’s possible. But it’s not the only possibility.

Another possibility is that the person in the video wanted to share, but they didn’t want to be identified. We all know that sexuality is already taboo, it is already shameful. So sharing a video of yourself doing sexual things can get you in trouble, even if it’s totally legal. Imagine the schoolteacher who was fired because of her past career as an adult entertainer, or the business exec who was outed as a pervert when it was discovered that he was into some kinky stuff. Or Anthony Weiner.

That’s a shame, because sharing a video of yourself is also a great way to push back against the shame surrounding sexuality that is perfectly normal, healthy, and worthy of celebration.

Another reason that people might not want to reveal their identities is that they’re technically on the wrong side of the law but really just trying to get by. Think of the illegal immigrant who works as a stripper to pay the bills, or the “escort” who is paying their way through college by being paid for sex. These cases are not totally legal, and so maybe they shouldn’t be encouraged, but they have absolutely nothing to do with child sexual abuse or sex trafficking.

So there’s clearly a chilling effect on speech, as the lawyers like to say. This is speech that would otherwise be protected by the First Amendment, but it is being silenced by this sweeping policy that blocked 13 million videos. The videos that are left are likely ones from larger professional producers and organized operations.

Ironically, those larger businesses are more likely to have their own issues with abuse, like coercive labor practices, low wages, poor working conditions, and exploitation of workers in the industry.

So this policy tends to silence the individuals in favor of the big producers. Instead of seeing real sex, Pornhub consumers are more likely to see fantasy, staged, and artificial depictions. And even if the policy stops child exploitation, it may actually make adult exploitation worse.

Once again, the result is that we as a society are left unable to be open and honest about sex. When young folk have no legitimate sources of information about their developing sexuality, and they turn to Pornhub for answers, they’ll only find fiction and staged depictions by actors. We’ve silenced the examples of real people doing real stuff.

The Real Problem

If, after that, you’re still thinking that it’s worth it to protect those 118 kids… well, surely they deserve protection. But keep in mind that compared to the 118 troubling videos on Pornhub, Facebook reported 84 million instances of child sexual abuse material on their platform during the same period. 84 million. And what are we doing for those kids? Where’s the lawsuit against Facebook?

What is actually going on here is clearly not a crackdown on child pornography or a drastic measure to protect the children. In fact, it is another step in an ongoing effort to stamp out pornography in general. It was the same story with Backpage, with Craigslist, and on and on. These businesses are targeted because they promote sex, not because they are encouraging the abuse of children.

Powerful forces and vested interests want to control sex for the sake of controlling sex. They say they are doing it to protect children, but the methods say otherwise. Time and again abuse is linked to pornography, and sex work is linked to sex trafficking. So then any effort to limit pornography and sex work is framed in the language of child sexual abuse and sex trafficking. The notion that these terms are synonymous is another weapon in the war on sex, the push to silence us from talking about this significant part of our lives.

Section 230

A separate but not unrelated issue is the recent call for a rollback of Section 230 protections for tech companies. This legal provision essentially says that an Internet company cannot be held responsible for content produced by someone else, even if it passes through the company’s systems. Facebook, Reddit, and others have leaned heavily on this protection, claiming that they are not responsible for the things their users post.

Let’s look at exactly what this law says:

No provider or user of an interactive computer service shall be treated as the publisher . . . of any information provided by another information content provider.

47 U.S. Code § 230(c)(1)

It goes on to make clear that the service provider, or ISP, is not liable when they allow someone to post something that breaks the law, or whatever. And it also says that they are not liable if they take action to remove content that someone else posts. So there’s protection on both sides. To use Twitter as an example, they are arguably not responsible if one of their users posts a tweet that calls for violence against a sitting Vice President. And they are not responsible if they choose to block that tweet either.

The reason for this protection is, in the first case, to encourage free speech. If the ISPs were open to liability, they might be inclined to severely limit what can be posted. (Imagine if Twitter, overnight, deleted every tweet and every account that doesn’t have a blue checkmark next to it, like what Pornhub did to avoid liability.)

In the second case, the hope was that, even though they are not required to do so, ISPs would develop mechanisms to remove illegal postings that were clearly over the line. By shielding them from liability for removing content, the law gave the ISPs the ability to do some of their own policing without fear of repercussions.

For the most part, this system has worked well. Companies are able to build thriving online communities that are accessible and open to everyone. I wouldn’t call them bastions of free speech, but at least the barriers to free speech are relatively low. And some of the companies, with varying degrees of success, try to police the worst of their content. (Though 84 million recent instances of child sexual abuse material on Facebook would suggest that there is still work to be done.)

A Slippery Slope

Unfortunately, Section 230 protections have slowly been eroding in the years since the law went into effect. Some people have been prosecuted for failing to stop abusive content even when they should have been protected. Other companies have taken to blocking far more than necessary just to avoid liability (e.g., Pornhub).

I say this is unfortunate because I am generally uncomfortable with private businesses being in the business of policing. When ISPs are encouraged or compelled to police the content of their users, they often set their own rules, and they often go far beyond legal requirements and restrict what should be protected speech.

Police Should Police

It seems to me that when courts or legislators suggest that companies like Facebook should be responsible for their users, they are relinquishing the government’s own responsibility to protect the welfare of its citizens. It is worth making a comparison to the physical world here.

Large corporations, retail outlets, or landlords sometimes hire their own private security. But these private forces always operate in a secondary capacity to government police forces. If there is a disturbance in the mall, for example, the private security’s primary task is to call the police, not to intervene. Everyone expects the government police to keep the place safe. So why is it different on Facebook?

You say, but wait, they’re totally different! There’s millions of posts on Facebook every day, but at most there’s maybe a thousand people in the mall. How could someone possibly police Facebook?

This is exactly Facebook’s own argument. Facebook the company is much smaller than its user base, compared to a mall, which probably employs as many people as it serves in a day. So the mall is actually in a much better position to provide private security than Facebook is. But I’m not suggesting that it should.

From the perspective of keeping a population safe, there are a few hundred million people in the U.S., period. Some of them are at the mall, and some of them are on Facebook. But it’s a constant number of people. Why can’t the police be responsible for keeping people safe both online and in the mall?

Obviously this is a very complicated issue, as is policing in general. Many would argue that if we do things right in our communities we wouldn’t even need police in the first place. And clearly the police in their present form are failing miserably at keeping communities safe anywhere. But my point is simply that we should not be looking to private companies, whether they are Facebook, or AT&T, or Chipotle, to police us. It’s just a bad idea.

References

  1. Kristof, Nicholas. December 4, 2020. “The Children of Pornhub.” New York Times.
  2. Paul, Kari. December 14, 2020. “Pornhub removes millions of videos after investigation finds child abuse content.” The Guardian.
  3. Brenner, Susan. April 6, 2009. “Section 230 Immunity—Revisited.” CYB3RCRIM3 Blog.
  4. Goldman, Eric. May 25, 2009. “Web Host Convicted of State Child Porn Crimes Despite 230–People v. Gourlay.” Technology & Marketing Law Blog.

By Kenneth

Kenneth is a graduate student at Wayne State University studying sociology. He is also the host/producer of The Unspeakable Vice Podcast and author of "Lessons Learned: Life-Altering Experiences of Incarceration."

2 replies on “Episode 10: Pornhub and Section 230”

Your thoughts about Section 230 echo those of a reporter interviewed on Fresh Air on Jan. 28 about QAnon. He’s glad that Twitter and others have banned Trump and lots of QAnon posters, but he’s really unhappy about the banning decisions being made by business owners who don’t have the public good as their first priority.

For some other voices on this issue, here is an article from the University. https://michigantoday.umich.edu/2021/01/29/is-having-bad-information-worse-than-having-no-information/

Cliff Lampe asks “how do we step back from” people having their own reality? This gets to the really complicated part of the issue. How do we determine what is OK for people to say and what is not? My opinion that government should be in the business of policing is related to stopping illegal behavior and real, tangible harm. But when it comes to misinformation, unpopular political positions, and extremism, the question of what should be allowed and what shouldn’t gets much more challenging.
