by Steven White
For example, as the company continues to grow, Facebook, which has a reported 93 percent usage rate among teens, is increasingly a platform for cyberbullying. According to reports from January of this year, Facebook is set to exceed 1 billion users before 2013. Given this rapid growth, especially among younger demographics, the need for user protection has driven discussion and policy changes within the company.
In light of cyberbullying concerns, Facebook now gives users tools to better communicate their feelings and to alleviate the conflicts that come with feeling bullied on the Web. Gone are the days of the complicated “Report” feature. Now, users can simply click “This post is a problem” to report harmful material, whether it is text or an image. This feature reaches beyond the norm of reporting or flagging material as inappropriate due to nudity or profane visual content. Users can report posts that cast them in a negative light or are upsetting, and are then asked to categorize in what way the post is harmful, which helps determine the issue and its magnitude. The posts and users involved are then routed to the appropriate people to help resolve the problem.
The need for additional change is apparent. What is not so apparent is how to address it. In an ideal world, cyberbullying would be eradicated entirely. The current approach to monitoring the problem relies heavily on human moderation. But given the millions of uploads to social networks every day, this approach is neither feasible nor cost-efficient. What can be accomplished, however, is incorporating a technology that handles this task cost-efficiently without compromising the accuracy and speed of the moderation process. This is where new technologies like ImageVision’s can play a pivotal role. (Editor’s Note: The writer is a principal of ImageVision)
Serving as a frontline of defense, ImageVision technology scans for inappropriate language in text that indicates any traces of cyberbullying. The technology is triggered by certain key words and can seamlessly run in the background of social network usage. This occurs automatically and in real-time, as the content is being shared. Content that is regarded as “safe” loads immediately to the network with no further action taken. Suspect videos, images and text are “flagged” for human moderators to review.
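The triage flow described above can be sketched as a simple keyword filter. This is an illustrative assumption only; ImageVision’s actual technology is proprietary, and the word list, function names, and labels here are hypothetical:

```python
# Hypothetical sketch of a keyword-triggered moderation gate, assuming a
# simple substring match. Real systems would use far more sophisticated
# analysis; this only illustrates the "safe loads immediately,
# suspect goes to human review" split.

FLAG_PHRASES = {"loser", "ugly", "hate you"}  # illustrative list, not ImageVision's

def moderate(post_text: str) -> str:
    """Return 'safe' (publish immediately) or 'flagged' (queue for human review)."""
    lowered = post_text.lower()
    if any(phrase in lowered for phrase in FLAG_PHRASES):
        return "flagged"  # only this subset ever reaches a human moderator
    return "safe"

print(moderate("Have a great day!"))       # safe content posts with no further action
print(moderate("You are such a LOSER"))    # suspect content is held for review
```

In this sketch only the flagged subset reaches a human, which is the source of the cost savings the next paragraph describes.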
By saving human moderation for only the suspect content – not for all content – social networks can serve their users more efficiently and slash operations costs up to 80 percent, all while delivering a more enjoyable, teen-safe social networking experience. Photobucket is already employing this technology to protect its users with strong results.
Nobody deserves to be abused online. Period. Emerging technologies can help monitor cyberbullying, but it all comes down to the person engaging in the activity. With the increasing use of social media among teens and available technologies like ImageVision’s, cyberbullying can be kept at bay.