The Facebook XCheck System Has Been Abused
In response to the recent Facebook XCheck news, Chris Riley, R Street Senior Fellow for Technology and Innovation, issued the following statement:
“With XCheck, Facebook tried a high-wire act of paying special attention to high-visibility accounts—but as has often been the case, the company lost its balance, and the result is yet another negative news cycle. There are arguments to be made for giving additional protections to some users, but they need to be made in the open, and it seems likely the balance would come out differently. If Facebook’s goal in developing XCheck is to protect those users most likely to be caught inadvertently by an automated moderation system, then the program must also encompass those likely to be victims of intentional abuse of such a system. This includes both conservative and liberal voices, and many others who are frequent targets of online harm and harassment.
“It is not clear that the full range of people the program needed to protect was taken into consideration, and the lack of transparency into the XCheck program means the public at large cannot evaluate the details or the equity balance inherent in the system as implemented in practice. And that has to be rectified.
“However, it is equally important to separate out the potential harms from a lack of transparency into the XCheck system from those harms that are inherent to the system itself, regardless of its transparency. One widely accepted but routinely forgotten fact is that moderation is a fundamentally imperfect exercise. There is no perfect moderation system, whether automated, manual or a mixture of the two. Only by consistently investing more time, money and resources can these systems be improved. Thus the standard of review into whether a moderation system is ‘good’ must look at that investment and its output—while recognizing that mistakes are inevitable.
“High-visibility Facebook accounts, because their content is shared more widely and engaged with by more users, are certainly reported to automated moderation systems more frequently than others. Facebook then has to weigh the high cost of an errant takedown of content it would like to see remain online, such as posts from political figures and journalists. In that instance, it is not unreasonable to imagine a system such as XCheck carrying real benefit.
“But the context matters here—and with what we know so far, the XCheck system has been abused and, lacking full transparency into the program, appears now to have become another moment in which Facebook’s dedication to consistently improving its moderation practices has been called into question.”
Given such challenges, Riley concluded: “The short-term solution isn’t immediately clear, but the medium- and long-term ones remain the same: Transparency and trust are the key pillars around which the industry must continue building itself, and there remain deep challenges for the entire industry, with Facebook essentially the poster child of the modern dynamic. But if transparency into content moderation systems can be increased, and if good multi-stakeholder conversations can be conducted around them, there is room for collaboration and substantial improvement as compared to the unsustainable status quo.”