The Future of Online Moderation
Since the Cambridge Analytica scandal surrounding the 2016 US election, people have been scrutinizing the influence social media companies wield through their ownership and use of personal data. Critics across the political spectrum have pointed to the immense power these companies hold not only to amplify certain voices but to suppress and remove dissenting views. Whether or not you agree with their right to do so, it is impossible to deny the power a handful of companies have over the public discourse of the entire planet.
To a large extent, these companies rely on the labor of their users to generate massive revenue. Millions of photos, videos, and text posts are uploaded daily, keeping users engaged and further refining the advertising algorithms that serve them targeted ads. In the same vein, companies rely heavily on users to moderate content, whether explicitly through user reports or implicitly through the analysis of usage data. This data is then funneled into black-box algorithms that decide which users and content to recommend and which to suppress or remove. As a result, one of the largest issues with these systems is the near-total lack of transparency behind moderation decisions: it is often unclear, even to the engineers who build them, how decisions are made about the content served to users.
In the case of Facebook, the company hires tens of thousands of content moderators to decide whether flagged content is appropriate to be served to users, and their decisions are often final. Worryingly, this process, kept behind closed doors, makes mistakes around 10% of the time, a figure even Mark Zuckerberg has admitted. Moving forward, we should consider this practice inappropriate and irresponsible: decisions involving the public should include the public in order to maintain a certain level of objectivity and positive engagement.
A contrasting moderation model involves regular users directly. Several other social media sites rely solely on the labor of unpaid users to curate content and enforce decency rules. On Reddit, each subreddit depends on the work of selected moderators to curate content and judge speech, giving users the ability to decide what content they will tolerate on specific forums. Twitch works similarly: individual streamers select moderators who ensure that their chat abides by site-wide rules and enforce additional rules that vary stream by stream. These methods increase transparency and are crucial to aligning online discourse with human interests rather than the interests of specific corporations.
One of the more daring projects to close the gap between users and moderation decisions is being tested by Twitter. In January 2021, Twitter announced its pilot program Birdwatch, which gives selected users the ability to flag tweets for misleading claims and to defend their reports. For the alpha program, Twitter chose active, identity-verified users in the US from across the political spectrum, giving them the ability to label, but not remove, tweets deemed misleading. While at first glance this program seems ripe for widespread manipulation, it demonstrates a growing trend of social media companies democratizing moderation decisions. Twitter's openness to researchers accessing and analyzing this data is certainly welcome: the metaphorical curtain is pulled back on these decisions, and potential manipulation can be discovered more easily. With this program we see an example of the growing trend toward more transparent moderation processes, whether that stems from these companies' desire to be more civically responsible or, perhaps more cynically, from a desire to extract additional labor from the general public and keep users engaged. It is also clear that additional educational resources and sessions could increase the effectiveness of such a public reporting program. The decisions Twitter makes after the research from the Birdwatch alpha will have the potential to dictate what becomes expected of us as informed and responsible internet users.
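For a sense of what that kind of outside analysis could look like, here is a minimal sketch of how a researcher might tally the labels in a downloaded export of Birdwatch notes. The file name and column name below are assumptions for illustration, not Twitter's documented schema.

```typescript
// Minimal sketch: count Birdwatch notes by contributor classification.
// "birdwatch-notes.tsv" and the "classification" column are hypothetical
// stand-ins for whatever the real export contains.
import { readFileSync } from "fs";

const raw = readFileSync("birdwatch-notes.tsv", "utf-8");
const [headerLine, ...rows] = raw.trim().split("\n");
const headers = headerLine.split("\t");

// Locate the column that records how the contributor labeled the tweet.
const classificationIdx = headers.indexOf("classification");

// Tally notes per label to see how contributors are classifying tweets.
const counts = new Map<string, number>();
for (const row of rows) {
  const cells = row.split("\t");
  const label = cells[classificationIdx] ?? "UNKNOWN";
  counts.set(label, (counts.get(label) ?? 0) + 1);
}

for (const [label, count] of counts) {
  console.log(`${label}: ${count}`);
}
```

Even a tally this simple makes the point: once the notes are public, anyone can check whether one label, or one group of contributors, is dominating the conversation.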
To secure a more transparent and democratic future, we as citizens should be more demanding of social media companies and should have full control over what happens with our data. In the EU, the General Data Protection Regulation (GDPR), adopted in 2016, introduced new provisions protecting users. A law of this sort, giving consumers back some of the power held by internet companies, would certainly be beneficial and should be enacted in the US as well. For you, the reader, there are tools you can use right now to reclaim some control over your data. AdNauseam is a browser extension that simulates a click on every served ad, polluting the data behind targeted advertising (sketched conceptually below). You can also use ad blockers to prevent advertisements from being served to you at all. Being a responsible netizen is crucial, but personal responsibility will only go as far as entrenched systems allow. We should demand to break free from the tyrannical reign of these social media giants and advocate for policies that diminish their chokehold over public discourse and communication.
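To make the idea behind AdNauseam concrete, here is a conceptual sketch, not AdNauseam's actual code, of how a browser extension's content script might generate click-like traffic on every ad it finds. The CSS selectors are hypothetical placeholders for whatever a real blocker's filter lists would identify as ads.

```typescript
// Conceptual sketch of ad-click obfuscation. The selectors below are
// hypothetical; a real extension would rely on maintained filter lists.
const AD_LINK_SELECTORS = [".ad-container a", "a[data-ad-slot]", "a[href*='adclick']"];

function visitServedAds(): void {
  const visited = new Set<string>();
  for (const selector of AD_LINK_SELECTORS) {
    document.querySelectorAll<HTMLAnchorElement>(selector).forEach((ad) => {
      if (!ad.href || visited.has(ad.href)) return;
      visited.add(ad.href);
      // Request the ad's target URL in the background instead of navigating,
      // generating click-like noise without disrupting the visible page.
      fetch(ad.href, { mode: "no-cors" }).catch(() => {
        // Ignore network errors; the goal is only to generate noise.
      });
    });
  }
}

// Run once the page has finished loading its ad slots.
window.addEventListener("load", visitServedAds);
```

The point of the design is obfuscation rather than blocking: if every ad appears to be clicked, the profile built from those clicks stops saying anything meaningful about you.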
Image credit: "social media" by Sean MacEntee is licensed under CC BY 2.0