
Content moderation, #FBrape and compassion for the user

By Steph Guthrie

[Content warning: discussions about rape and violence against women]

Last week, Women, Action and the Media (WAM!) co-launched the #FBrape campaign to persuade Facebook’s advertisers to pull their business unless Facebook changes how it moderates content depicting or glorifying gender-based violence. The campaign follows years of mounting pressure on Facebook to crack down on misogynistic content. It got me thinking about a lot of things, one of which is the nuts and bolts of how humans moderate web content produced and shared by millions of other humans.

In my humble opinion, web content moderation is more art than science. And it’s interdisciplinary. To assess the suitability of in-house or user-generated content, a good community manager must have a strong sense of the law, their employer’s organizational values, and their (existing and target) users’ demographics and tastes. If you’re moderating content for one of the world’s largest and most heavily used sites, user demographics and tastes are about as wide-ranging as the day is long.

“But Steph,” you say. “Facebook has a 17-page document laying out exactly what user-flagged content moderators should remove. It’s pretty cut-and-dried.” Is it, though? Sure, the document is extensive and granular. But many of its instructions leave ample room for interpretation, so moderators inevitably bring their own social and cultural values into the mix. If you’re deciding on a murky ethical issue and time is a factor, you’ll probably rely at least a little on your own snap judgments.

For example, the document says hate speech gets a pass if it comes in the form of humour or cartoon humour, “unless humor is not evident in the post/photo.” Doesn’t that stipulation depend to a large degree on what the individual moderator finds funny? An individual moderator’s sense of humour is shaped in part by culture. What if that moderator’s culture has a tendency to make light of violence against women (spoiler alert: many cultures do, including North America’s)? It would seem reasonable to conclude that the moderator might be more likely to see humour in posts about gender-based violence.

It’s important to keep in mind that the relationship between culture and content moderation is a two-way street. As Tarleton Gillespie asks in this excellent essay, “how do we balance freedom of speech with the values of the community, with the safety of individuals, with the aspirations of art and the wants of commerce?” In a multi-jurisdictional landscape like the internet, freedom of speech tends to win the day, and with good reason. As more of our communication takes place in spaces like Facebook, the rules Facebook applies to moderate user content will gradually shape our collective understanding of the boundaries of public discourse. A healthy democracy should keep those boundaries to a minimum.

On the other hand, for as long as freedom of speech has been a cultural imperative, bigots have invoked it as an excuse for propagating their bigotry. When our ancestors penned their various constitutions, I somehow doubt they had our right to share images of hog-tied young women with phrases like “tape her and rape her” in mind. It bears noting that Facebook has a long and storied history of relying on rather bro-centric rules for content moderation, particularly on issues of sex, gender, and specifically women’s bodies. As we’ve blogged about before, this suggests that the team that develops (and perhaps implements) these rules may itself be bro-heavy. This is why I appreciate the clearly defined goals of the #FBrape campaign, which are to persuade Facebook to:

  1. Recognize speech that trivializes or glorifies violence against girls and women as hate speech and make a commitment that you will not tolerate this content.
  2. Effectively train moderators to recognize and remove gender-based hate speech.
  3. Effectively train moderators to understand how online harassment differently affects women and men, in part due to the real-world pandemic of violence against women.

That last demand is vital, because it emphasizes the fact that certain elements of cultural context will be more evident to some moderators than others. While there are, of course, exceptions, the real-world pandemic of gender violence tends to be more top-of-mind for women than men. As such, some moderators may need to be educated on what kinds of cultural context might lead them to view content differently, perhaps with more compassion for a greater number of users.

This is what it all comes down to: compassion for the user. That matters not only for moral reasons but also from a cold, hard bottom-line perspective – don’t you want as many people as possible using your site? Can you afford to lose a chunk of them because your moderators don’t possess or apply the perspective required to treat users with compassion? Compassion for the user makes good ethical and business sense, and it requires perspective. The best moderators are in a constant process of broadening their perspectives to fill the gaps in their own lived experience. They can do a lot of that just by listening to users, both on their own sites and across the internet at large.

Steph Guthrie is the moderator of the MediaTech Commons. She’s an internet animator and a full-time feminist.
