Online hate speech and its real-world consequences

This is part of the literature review for my PhD thesis.


Hate speech is a rhetorical device used by populists in political campaigning. It typically emphasises an “us versus the other” framing (Yachyshen and Mather, 2020). On social media, where people are highly connected, hate speech proliferates easily and can intensify quickly (Johnson et al., 2019). Online hate speech is grounded in divisions created in the real world (Pohjonen and Udupa, 2017) and has real-world impacts, especially on Black and Muslim communities (Williams et al., 2020). Luqman’s (2018) study found a correlation between Trump’s hate rhetoric against minorities and an increase in hate crimes, and his use of Twitter led other accounts to tweet extremist content (Yachyshen and Mather, 2020).

According to Siegel (2020), while there is no single definition of hate speech, it is generally considered to be “bias-motivated, hostile, and malicious language targeted at a person or group because of their actual or perceived innate characteristics.” Waltman (2018) adds that hate speech “is an attempt to vandalize the other’s identity to such an extent that the very legitimacy and humanity of the other is called into question.”

Social media platforms define hate speech differently. YouTube’s (2018) guidelines state that hateful content includes condoning violence against groups or individuals based on race or ethnic origin, religion, disability, age, gender, nationality, veteran status, or sexual orientation and gender identity. Twitter’s (2018) guidelines identify hateful conduct as promoting violence against or directly attacking people based on race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. Facebook’s (2018) definition of hateful content is content that directly attacks people based on race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, disability, or disease. Unlike the other two platforms, Facebook’s definition does not cover incitement to violence.

The Internet enables hate speech to be disseminated efficiently, reaching new audiences and fostering a sense of community among hate groups (Bowman-Grieve, 2009). Targets are attacked on the basis of ethnicity, physical appearance, sexual orientation and gender identity, and class (Silva et al., 2016). Isbister et al. (2018) also identify public figures such as journalists, politicians, and artists as targets of hate speech. Exposure to hate speech has consequences, including fear, especially among marginalised groups (Hinduja and Patchin, 2011), withdrawal from public debates (Henson et al., 2013), and avoidance of political talk (Barnidge et al., 2019). Online hate speech can also lead to an increase in racial hate crimes (Chan et al., 2016), such as those against Muslims (Stephens-Davidowitz, 2017) and refugees (Müller and Schwarz, 2019).

Studies of hate speech take different approaches. Because there is no single definition of hate speech, there is also no consensus on the most effective way to detect it (Siegel, 2020). Some studies use a binary classification, identifying whether hate speech is present or absent (Davidson et al., 2017), while others use text mining or natural language processing (Fortuna and Nunes, 2018), often through dictionary-based methods (Liu and Forss, 2015; Isbister et al., 2018). Warner and Hirschberg (2012) used distance metrics to detect hate speech that may have been misspelled, while Magu et al. (2017) included code words alluding to out-groups in their dictionary. Most studies also focus on English-language content.
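To make the dictionary-based and distance-metric approaches described above concrete, the sketch below combines the two ideas in a minimal form: exact lexicon lookup plus a Levenshtein edit-distance check to catch deliberate or accidental misspellings. This is an illustrative toy, not the method of any cited study; the lexicon here is a hypothetical placeholder rather than a real word list.

```python
# Minimal sketch: dictionary-based detection with an edit-distance
# tolerance for misspellings. Lexicon entries are placeholders.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance computed with a rolling dynamic-programming row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def flag_tokens(text: str, lexicon: set[str], max_dist: int = 1) -> list[str]:
    """Return tokens matching a lexicon entry exactly or within max_dist edits."""
    flagged = []
    for token in text.lower().split():
        token = token.strip(".,!?")
        if any(edit_distance(token, term) <= max_dist for term in lexicon):
            flagged.append(token)
    return flagged

# Hypothetical placeholder lexicon, for illustration only.
lexicon = {"slurword", "hateterm"}
print(flag_tokens("An example with slurw0rd in it", lexicon))  # ['slurw0rd']
```

A distance threshold of one edit catches simple character substitutions (e.g. “0” for “o”) that defeat exact matching, at the cost of some false positives on short words; real systems tune this threshold and combine it with context-aware models.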

Uyheng and Carley’s (2020) study of hate speech in the Philippines during the COVID-19 pandemic shows that political identities were the targets of hateful discussions on Twitter, and that humans, rather than bots, engaged more in these discussions. This hate speech can be traced to a more polarised public, the government’s failure to curb the pandemic, and the ongoing tensions between the Philippines and China over the West Philippine Sea territorial claims (Uyheng and Carley, 2020).

How have governments and platforms responded to online hate speech? Two trajectories currently dominate: first, moderating content and banning accounts and communities; and second, employing counter-speech to drown out hate speech. According to Siegel (2020), banning user accounts may be counterproductive, stirring up support from those sympathetic to hateful groups. “When well-known users come under fire, people who hold similar beliefs may be motivated to rally to their defense and/or express views that are opposed by powerful companies or organizations” (Siegel, 2020, p. 72). Because banning users and censoring hate speech might infringe on the right to free speech, counter-speech is the preferred way to suppress hate speech (Gagliardone et al., 2015). An example of counter-speech was a “peace propaganda” campaign during the 2013 Kenyan elections, in which incentives were given to Kenyans who sent messages of peace to one another online (Benesch, 2014).

Online hate speech has proliferated with the rise of social media. Evidence shows that online hate speech has real-world consequences and can incite violence against minority groups. Policymakers should therefore find ways to combat online hate speech without sacrificing the right to free speech.