Facebook, which last year allowed ads promoting a neo-Nazi podcast on its platform, will now treat white nationalist and white separatist content as white supremacy under a new policy being implemented next week.
“It’s definitely a positive change, but you have to look at it in context, which is that this is something they should have been doing from the get-go,” civil rights attorney David Brody told Vice Motherboard. “How much credit do you get for doing the thing you were supposed to do in the first place?”
Users who try to post the now-prohibited white-identity extremist content will be directed to the nonprofit Life After Hate, which helps people leave hate groups.
The new policy will also take effect on Instagram next week, which is especially meaningful: Instagram has become a major platform for the distribution of white nationalist hate speech and conspiracy theories, second only to YouTube in influence and vitriol.
Twitter, too, has done little to curb the spread of hate speech on its platform, notably allowing white nationalist political celebrities like Rep. Steve King (R-Iowa) and Richard Spencer to use the service to spread their ideology. Twitter isn’t alone in failing to enforce its terms of service, but it fairly earns its criticism in this regard.
And the platforms here — Facebook, Instagram, Twitter and YouTube — are among those that helped amplify the message of the Christchurch terrorist by disseminating his video of the attack. While many platforms scrambled to address the video, it was often a case of too little, too late.
Now, it seems at least Facebook is committing to doing better.
“We’ve had conversations with more than 20 members of civil society, academics, in some cases these were civil rights organizations, experts in race relations from around the world,” Facebook counterterrorism policy director Brian Fishman told Motherboard. “We decided that the overlap between white nationalism, [white] separatism, and white supremacy is so extensive we really can’t make a meaningful distinction between them. And that’s because the language and the rhetoric that is used and the ideology that it represents overlaps to a degree that it is not a meaningful distinction.”
It wasn’t just the Christchurch incident that spurred Facebook’s new moves to combat white nationalism. Credit in large part rests with activists and historians who pushed Facebook to act following the platform’s attempt to excuse white nationalism and white separatism when it took issue with white supremacy.
Facebook/Instagram finally banned white nationalism.
Not complaining about the outcome– it's nice of Facebook to finally wake up to "overlap between white nationalism and separatism and white supremacy"– but congrats go to the activists pressuring them, not corporations. ❤️🖤 pic.twitter.com/9eAluM6S9P
— AntiFash Gordon (@AntiFashGordon) March 27, 2019
And therein comes an important and sobering reminder: Social media platforms often prohibit hate speech generally, but the enforcement of that policy leaves much to be desired. Even when some content is flagged and removed, these platforms remain breeding grounds for the kind of far-right radicalization that was responsible for nearly every act of domestic terror last year.
Instagram is where young people go to get radicalized. If Facebook is serious about cracking down on white nationalism, the change could do enormous good in heading off the radicalization of Gen Z’s would-be domestic terrorists. But we likely won’t know for some time how the social media giant’s words translate into action.
Katelyn Kivel is a contributing editor and senior legal reporter for Grit Post in Kalamazoo, Michigan. Follow her on Twitter @KatelynKivel.