Social media sites: Personal advertisers for white supremacist groups

Scout Hutchinson, Columnist

Much of the world rejoiced on Jan. 8, 2021, when Twitter announced the permanent suspension of @realDonaldTrump from its platform. Twitter cited misinformation and the Capitol riot that took place just two days prior.

While the tweets that helped incite the Capitol riot on Jan. 6 were of special concern, as they encouraged violence within a government building, they were eerily similar to what we had grown accustomed to from the former president. For four years, Donald Trump was seemingly able to use platforms such as Twitter and Instagram to do little but spread misinformation and attack minority groups.

Throughout social media's rise over the last two decades, we have used it as a tool for protest and organization via a seemingly instantaneous line of communication. This has fueled mass protests and revolutions, creating a sense of connectedness and power through a screen. Sadly, what serves as a form of social activism one day can, on another, become propaganda for white supremacy shielded by a gross misinterpretation of free speech.

Online hate groups have found refuge within social media sites such as Facebook and Twitter for years. Not only have they been able to spread hate almost instantaneously to large groups of people, but they have also built platforms for these groups to grow and form larger collectives, allowing their hate to reach all four corners of the world with the click of a button.

The eras of Donald Trump and COVID-19 have finally shed some light on the serious implications of these sites acting as if hate speech were the same thing as free speech. Interestingly, the First Amendment does not apply to privately owned tech companies and social media sites, as they are allowed to exercise their own freedom of speech by creating their own guidelines.

However, according to a Forbes article, these guidelines tend to flag anti-racist rhetoric while continuing to allow white supremacy to be amplified, meaning this is about much more than a gross misinterpretation of the limits of free speech.

Even with steps toward guidelines and accountability, many social media sites and online forums remain almost completely unregulated, which has allowed the spread of violent rhetoric to continue growing. Hate speech and violence also fit perfectly into social media algorithms built for profit, letting these groups use the sites as their own private advertisers while the sites themselves continue to make money. Like fake news, hate speech generates traction, and the two are directly tied to one another when it comes to radicalization.

According to the Washington Post, the free rein given to hate speech and organizing online can be directly tied to real-life violence, something that is not protected under the First Amendment. The riot at the Capitol is only one example of this. Social media allows a constant stream of propaganda for white supremacist groups that not only rallies their base but also normalizes the rhetoric they use. This normalization is a step toward radicalization, meaning these posts increase the likelihood that more people will join these organizations and feel empowered to act.

This is not to say that harsher restrictions on misinformation would make white supremacist groups automatically disappear. However, such restrictions would create a barrier to one of their main lines of communication and advertising. They would also make it much harder for hate to organize and would allow for more oversight of Facebook, Twitter and other sites. While the banning of Donald Trump was an important first step in addressing online hate and misinformation, we cannot just cut off the head and hope that two or three do not grow back in its place.