If you are afraid of Trump, care about Section 230

Section 230 of the Communications Decency Act was drafted in 1996 and has shaped the landscape of user-generated content on the internet, touching everything from tweets to sex trafficking to Airbnb; it is the single most influential piece of legislation affecting Silicon Valley giants to date. In essence, Section 230 releases tech giants from liability for user-generated content, with some crucial caveats. In practice, that means an inflammatory or violent tweet does not make Twitter legally liable for the words of that user. The law itself states, “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” While this provides intuitive security to tech companies seeking to increase the flow of information, it also leaves glaring gaps in coverage that rely wholly on the goodwill and responsibility of each company. Currently, the government requires intervention in user-generated content only in cases of “copyright violations, sex work-related material, and violations of federal criminal law.” This is distinct from the responsibility that longstanding court precedent has placed on print media, where the common standard is one of liability. For example, if an opinion piece published in the New York Times were deemed libelous in a court of law, the New York Times could be sued for damages alongside the author. From a legal perspective, Section 230 relegates tech giants to a category between print media and word of mouth. By mandating action from tech giants only this narrowly, we effectively place control over the minds of countless Americans in the hands of the fleeting judgments of Silicon Valley’s CEOs. We rely on the altruism of tech leaders to combat issues like misinformation, cybercrime, and foreign manipulation, but in an era when social media is increasingly powerful in the court of public opinion, how can altruism begin to be sufficient?

The larger crux of this issue is, of course, the rampant spread of misinformation first brought to light in the 2016 and 2020 presidential elections and brought to a dramatic head in the January 6th attack on the U.S. Capitol. Though using media to garner political support is not a new phenomenon, social media has created an increasingly partisan landscape free of regulation. The issue first surfaced around the 2016 election with the Cambridge Analytica scandal, broadly known as the most infamous data failure in Facebook’s history, in which the personal data of tens of millions of Americans was siphoned off Facebook to influence their votes in favor of President Trump. Such insider information exploits social media algorithms that tailor news to each user’s interests and political affiliation, often skewing facts with a radical partisan slant. The failure lies in using user engagement, rather than credibility or social impact, as the signal for amplification. The engagement-based approach maximizes profit for tech companies, since popularity and confirmation bias demonstrably increase time spent on social media apps. These dynamics, especially prevalent on Facebook and Twitter, were largely ignored by the social media giants in the weeks following the 2020 election, even as their algorithms directly promoted President Trump’s fraudulent election rhetoric. They cast doubt on the legitimacy of Biden’s victory and allowed partisan news networks to fan the flames of the belief that mass fraud had produced a Democratic win. Coupled with President Trump’s inflammatory rhetoric and the lockstep support of the Congressional GOP, this whipped his voters into a frenzy, inciting violence at the Capitol. Far from an outlier, the January 6th insurrection should be seen as a harbinger of our nation’s political polarization, a division exacerbated by tech companies that act only once the consequences have already arrived.

Under Section 230, social media giants are entirely within their legal rights in their inaction. Actions taken after highly publicized scandals, including the removal of President Trump from most social media platforms, have been largely voluntary and are characterized by some critics as an attempt to stave off future reform. In light of overwhelming political and pandemic-related misinformation, tech giants have moved to strengthen their rules governing misinformation. Facebook has created new information hubs, including a COVID-19 Information Center for information about the virus; Twitter has expanded its Code of Conduct and is working to adjudicate the truth of certain spheres of misinformation; and even YouTube has become increasingly vigilant on COVID-19 issues specifically. These responses should indicate two things: first, that a more robust effort against political misinformation is wholly possible (if damaging to profit margins), and second, that under the current mandate of Section 230, such actions remain limited to the voluntary commitments of tech giants. This raises two questions: at what point of damage to our democracy will we finally regulate this issue, and by then, will it be too late?

Those who believe the current system is functional enough to protect society or the average consumer need look no further than the harrowing case of Herrick v. Grindr, which paints a clear picture of the very real consequences that follow when no critical mass of public pressure exists to make corporations do even the bare minimum. Matthew Herrick, the plaintiff, is a gay man who, after escaping an abusive relationship with an unstable partner, was subjected to continual mental and sexual abuse through the proxy of Grindr. Using a proliferation of false profiles bearing Herrick’s name and image, Herrick’s ex-boyfriend, Oscar Juan Carlos Gutierrez, directed men to Herrick’s home and workplace under the guise of consensual sex. Posing as Herrick, Gutierrez told these men that they were to engage in sexual activity with Herrick and that his protests were “part of the fantasy,” turning a staggering number of unwitting proxies, “up to 1,400 men, as many as 23 in a day,” into instruments of Herrick’s attempted sexual assault. Grindr, a dating app boasting over three million users, “explicitly prohibit[s] the use of their products to impersonate, stalk, harass or threaten,” but was wholly dismissive of Herrick. “I emailed and called and begged them to do something,” Herrick told one interviewer, the frustration rising in his voice. His family and friends also contacted Grindr about the fake profiles; in all, about 50 separate complaints were made to the company, either by Matthew or on his behalf. The only response the company ever sent was an automatically generated email: “Thank you for your report.” It was not until the New York State Supreme Court ordered immediate injunctive relief in Herrick’s favor that Grindr was willing to dismantle the false profiles. That injunction was eventually overturned, and the case remained mired in litigation for over a year, granting Herrick no relief and leaving his abuser Gutierrez a reliable tool with which to torment him. Under the immunity provided by Section 230, Grindr is completely shielded from any legal challenge by Herrick and is therefore disincentivized from acting responsibly. Herrick’s case was ultimately dismissed with prejudice by Judge Caproni of the U.S. District Court for the Southern District of New York (a dismissal with prejudice bars the case from being refiled). In the words of Herrick’s attorney, “[because] we were suing Grindr for its own product defects and operational failures—and not for any content provided by Matthew’s ex—Grindr was not eligible to seek safe harbor from Section 230. To rule against Matthew would set a dangerous precedent, establishing that as long as a tech company’s product was turned to malicious purposes by a user, no matter how foreseeable the malicious use, that tech company was beyond the reach of the law and tort system.”

The lack of an institutional safeguard under Section 230 could not be more apparent, and given the prevalence of tech monopolies, the issue grows every day. Considering both the societal and the human costs, I strongly urge you to consider how much faith you have in tech’s captains of industry. It is my belief that control over the information fed to our broader electorate is too large an issue to leave to the moral whims of a handful of Americans. I find it increasingly naive to believe for a second that the altruism of these men is somehow safer than the institutional safeguards we so sorely need, when the power of tech giants is sourced from the profit margins their users generate. That moral hazard will always be more alluring than preserving our democracy, which is why the solution cannot be left to the perpetrators of the problem.

