Facebook Acts on Hate Pages, Brands Must Act on Hate Content
By
Devin Redmond

In reaction to pressure from activists and rights groups, Facebook announced measures earlier this month to target hate speech. This is a big step for Facebook, one that includes better identifying and removing hate-oriented accounts, but the announced measures have even greater implications for major brands active in social media.

Organizations create brand pages as part of their owned-media marketing programs and invest heavily in growing and engaging their new social communities. A brand’s owned social presence is the responsibility of the company or organization, not of Facebook or the social network. So as Facebook assumes greater responsibility for ensuring that only socially responsible accounts are created, the onus, and the spotlight, of policing content on legitimate branded accounts falls on the brands themselves.

Until now, brands have been only half-heartedly protecting their audiences from abuse, exploitation, and offense. A lack of awareness of the issue, rapid audience growth, and the significant cost of manual content moderation, whether internal or outsourced (the latter adding complexities around privacy, compliance, and risk), have made it hard for organizations to keep hate and other offensive content off their pages. That, however, is about to change.

Under pressure from Women, Action, and the Media (WAM!) and other organizations, Facebook’s bold step to cleanse its network of accounts that sponsor hate speech is a strong impetus for brands to act on the content of their managed pages. It’s not just peer pressure; it’s ‘social pressure’, and it has implications for a company’s social media ROI.

Many brands already have social responsibility standards that also need to be applied to social media. Earlier in the month, in the same week, mind you, that the aforementioned WAM!/Facebook story surfaced, the NBA and the Indiana Pacers swiftly addressed an incident of hate and intolerance: the Pacers’ Roy Hibbert was fined $75,000 by the NBA for using homophobic speech. Hibbert immediately and wholeheartedly apologized, while the NBA said its players and representatives should always be held to the highest standard.

Unfortunately, while the NBA proactively seeks to reduce hate speech, its broader social stance still differs from its own social media reality. During that same week, the NBA’s main Facebook pages and YouTube channels, as well as those of the Indiana Pacers, continued to host thousands of instances of homophobic comments, racial epithets, and other abusive language. Even as the Miami Heat were lauded for their social media ‘success’ during the playoffs and the NBA hosted its second annual social media awards, league accounts, including those of the Heat and the Spurs, continued to be plagued by thousands of instances of offensive, derogatory, and exploitative content.

To be clear, the hate speech on the NBA’s social media properties isn’t posted by the NBA itself, nor does it likely reflect the league’s opinion. That this hateful content is continually hosted on the NBA’s and Pacers’ owned, promoted, and sponsored properties, however, presents a serious problem for their brands. Although social media has experienced exponential growth, the challenge of moderating content on branded pages is no longer insurmountable. It is reasonable to expect that any brand that is socially responsible and wants to create a trusted environment for its customers will address bad content on its sponsored social media accounts.

Fortunately for brands like the NBA, there are effective technologies they can use to automate, improve, and reduce the cost of enforcing acceptable-use content policies, backed by legal precedents like Section 230 of the US Communications Decency Act, which protects providers who moderate their sponsored forums. Aside from creating and posting acceptable-use policies (see an example) on their social media properties, brands should look to bad-content removal technologies to efficiently enforce free and safe speech in their social communities. These technologies operate with the support of the social networks, which have purposely exposed APIs that let technology providers remove abusive, hateful, and malicious content.
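To illustrate what such bad-content removal technologies do under the hood, here is a minimal Python sketch of a keyword-based comment filter. The blocklist terms, the `moderate` helper, and the comment structure are all hypothetical placeholders; production tools use far richer classifiers, and actual removal goes through the network’s own API (for Facebook, an HTTP DELETE against a comment’s ID in the Graph API).

```python
import re

# Placeholder blocklist; a real system would use a maintained lexicon
# and machine-learned classifiers, not two dummy terms.
BLOCKLIST = {"hateword", "abusiveword"}

def is_offensive(text):
    """Return True if the comment text contains a blocklisted term."""
    words = set(re.findall(r"[a-z0-9']+", text.lower()))
    return not BLOCKLIST.isdisjoint(words)

def moderate(comments):
    """Split comments into those to keep and those flagged for removal."""
    keep, flagged = [], []
    for comment in comments:
        (flagged if is_offensive(comment["message"]) else keep).append(comment)
    return keep, flagged

# Removal of a flagged comment would then be an API call, e.g.:
#   DELETE https://graph.facebook.com/<comment-id>?access_token=<token>
```

A filter like this runs against the comment stream the platform’s API exposes, so enforcement can happen continuously rather than through periodic manual review.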

Facebook enforcing its Community Standards will no doubt lead the way for brands to follow on their pages and accounts.  As WAM!, Facebook, the NBA, and many others have demonstrated recently, there is a will.  And, thanks to technology, there is a way.



Devin Redmond is a seasoned security and technology executive with more than 17 years of experience in public and private companies. Prior to co-founding and becoming CEO of Nexgate, he held executive and leadership roles in product management, marketing, and business development at Websense, Neoteris, Check Point, and Real Networks. Devin frequently speaks and writes about leveraging technology to address the new security, compliance, and management challenges organizations face in today’s cloud, mobile, and social environments.  Find him on Twitter: @DevinHRed