Online Hate Bans Appear To Work, According To Reddit
By Francis West on 19th September 2017
Contrary to concerns that banning abusive user groups would simply drive them to other areas of the site, research has shown that Reddit’s bans cut hate speech and bad online behaviour over the longer term.
What Happened?
Back in 2015, social news platform Reddit was receiving criticism for appearing to do nothing to curb online harassment and bullying that was occurring on the site as user numbers grew. Reddit’s own survey at the time showed that 50% of active users would not recommend the site, due to “hateful or offensive content and community”.
This led Reddit to publicly introduce a ‘no harassment’ policy designed to prevent attacks against people, not ideas, i.e. so as not to be seen as censoring or curbing free speech. The platform also banned so-called ‘revenge porn’, which was seen as a major online problem at the time.
Hateful Communities Blamed
Much of the blame for the worst behaviour was apportioned to hateful / racist communities on the platform, including the racist subreddit /r/coontown and the fat-shaming subreddit /r/fatpeoplehate. Reddit therefore banned these communities from the platform altogether.
Research Shows Beneficial Results
Research by the Georgia Institute of Technology, Emory University, and the University of Michigan found that banning hate groups caused their members to abandon Reddit altogether (rather than move elsewhere within it) at higher-than-average rates. The researchers also found that levels of hate speech fell among group members who stayed, and that the communities those members migrated to saw no increase in their own levels of abuse. The study recorded a 90.63% reduction in the use of manually filtered hate words by former r/fatpeoplehate members, and an 81.08% decrease by former r/CoonTown members.
What Does This Mean For Your Business?
The business world works best when customers, investors, and other stakeholders have confidence in companies, brands, products, and services. Businesses that supply platforms for, or enable the sharing / distribution of, speech of any kind, e.g. social media and web companies, have a common (and commercial) duty to provide a safe online environment for their users, e.g. by removing hate speech promptly and by making their part of the online environment particularly safe for children, young people, and the vulnerable.

Surprisingly, given the level of technological expertise and investment in large social media platforms such as Facebook and Twitter, they have long struggled to moderate their platforms effectively. Although banning hate groups may seem like an obvious answer, fear of being seen to censor or curb free speech (behaviour associated with authorities and governments), and thereby damage a high-value brand, may be one reason why major platforms have been perceived as not doing enough. Reddit’s results, showing how the platform turned things around by banning groups and that bans can be effective in modifying behaviour, could point the way for other social media platforms.
Online hate speech, hate crimes, and bullying are now being widely challenged, e.g. Google, GoDaddy, and Cloudflare’s decisions to stop serving the neo-Nazi site The Daily Stormer, and the UK Crown Prosecution Service’s move to treat online hate crime as seriously as offences carried out face to face, with tougher penalties and sentences for online abuse on social media platforms.
Most marketers will be familiar with Maslow’s Hierarchy of Needs and how important basic safety needs are likely to be for customers of any service. Anything that contributes to a safer online environment (the digital business environment) can therefore only benefit businesses as well as society. Businesses and organisations of all kinds can also help to minimise online hate crime by educating their staff / pupils / customers / users / stakeholders about their own policies for dealing with those found to be using hate speech, e.g. online at work.
We can all play our own individual part in making the online environment safe by reporting hate speech where we find it; the stance of open rights / free speech organisations such as the ORG (Open Rights Group) is important, but so is ensuring that the Internet is a safe place for all.