Racism on the Internet, sometimes also referred to as cyber-racism and more broadly considered an online hate crime or internet hate crime, consists of racist rhetoric or bullying that is distributed through computer-mediated means and includes some or all of the following characteristics: ideas of racial uniqueness, racist attitudes towards specific social categories, racist stereotypes, hate speech, nationalism and common destiny, racial supremacy, superiority and separation, conceptions of racial otherness, and an anti-establishment world-view.[1][2][3][4][5] Racism online can have the same effects as offensive remarks made face-to-face.[6]
Cyber racism has been interpreted as more than a phenomenon of racist acts displayed online. According to the Australian Human Rights Commission, cyber-racism involves online activity that can include "jokes or comments that cause offense or hurt; name-calling or verbal abuse; harassment or intimidation, or public commentary that inflames hostility towards certain groups".[7]
Though there have been studies of, and strategies for, thwarting and confronting cyber racism on the individual level, few studies have examined how cyber racism's roots in institutional racism can be combated.[8] An increase in literature on cyber racism's relationship with institutional racism would open new avenues for research on combating cyber racism on a systemic level.[9] For example, cyber racism's connections to institutional racism have been noted in the work of Jessie Daniels, a professor of sociology at Hunter College.[10]
Although some tech companies have taken steps to combat cyber racism on their sites, most tech companies are hesitant to take action over fears of limiting free speech.[11] A Declaration of the Independence of Cyberspace, a document that declares the internet as a place free from control by "governments of the industrial world",[12] continues to influence and reflect the views of Silicon Valley.
Online stereotypes can cause racist prejudice and lead to cyber racism. For example, scientists and activists have warned that the use of the stereotype "Nigerian Prince" for referring to advance-fee scammers is racist, i.e. "reducing Nigeria to a nation of scammers and fraudulent princes, as some people still do online, is a stereotype that needs to be called out".[13]
According to CNN, blackfishing occurs when a non-Black celebrity or influencer intentionally alters their physical appearance by appropriating the skin tone, hair texture, and overall aesthetics associated with and/or originating from Black people. It is common on social media. Many non-Black celebrities have been criticized for tanning their skin to appear darker, often looking more racially ambiguous and/or Black. It is believed that the rise of social media marketing has made space for more contemporary racist microaggressions that involve the monetization of aesthetics associated with Black American culture.
Blackface, the stereotypical practice of caricaturing Black people, has been around since the 19th century. The theatrical minstrel show included White performers participating in "comedic", though highly racist, skits and performances depicting Black people. Performers would often paint their faces black with exaggerated red lips and talk in early African American Vernacular English to symbolize their perceptions of Black people.[14] The stereotypes portrayed in minstrel shows have been reflected in various forms of media over time, such as Hattie McDaniel's role as the motherly, yet desexualized "mammy" in the 1939 film adaptation of the novel Gone with the Wind,[15] or the lazy and inarticulate "coon" caricature.[16] Today, the advancement of technology has enabled the use of GIFs and reaction memes of Black people to portray exaggerated forms of emotion online, because internet users think of Black people as "excessively expressive and emotional". One of the most commonly used people in GIFs and memes is media mogul Oprah Winfrey, whose clips from her former talk show and occasional TV specials are frequently turned into GIFs and memes used across the internet.[17]
Racist views are common and often more extreme on the Internet due to the level of anonymity it offers.[18][19] In a 2009 book about "common misconceptions about white supremacy online, [its] threats to today's youth; and possible solutions on navigating through the Internet, a large space where so much information is easily accessible (including hate-speech and other offensive content)", City University of New York associate professor Jessie Daniels claimed that the number of white supremacist sites online was then rising, especially in the United States after the 2008 presidential election.[20]
The popularity of sites used by alt-right communities has allowed cyber racism to garner attention from mainstream media. For instance, the alt-right claimed the "Pepe the Frog" meme as a hate symbol after mixing "Pepe in with Nazi propaganda" on 4chan.[21] This gained major attention on Twitter after a journalist tweeted about the association. Alt-right users considered this a "victory" because it caused the public to discuss their ideology.
Algorithms are designed by parsing large datasets, so they often reflect and reinforce societal biases via the biased patterns within the data, echoing them as definitive truths. In essence, the neutrality of an algorithm depends heavily on the neutrality of the data it is created from.[22] The results of discriminatory decisions become part of the foundational datasets. For example, job hiring data is historically discriminatory. When such hiring data is embedded in an algorithm, the algorithm determines certain groups to be more suited for the position, perpetuating the historical discrimination.[23]
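The feedback loop described above can be illustrated with a minimal, entirely hypothetical sketch: the dataset, group labels, and "model" below are invented for illustration only, and real hiring systems are far more complex. The point is simply that a rule which faithfully learns from biased historical records will echo that bias as if it were a neutral fact.

```python
# Hypothetical illustration: a model trained on discriminatory
# historical hiring data reproduces the discrimination.
from collections import defaultdict

# Invented historical records: (applicant_group, was_hired).
# Group "A" was favored in the past, so the data itself
# encodes the discrimination.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)

def train(records):
    """Learn per-group hire rates from the historical data."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for group, outcome in records:
        total[group] += 1
        hired[group] += outcome  # True counts as 1
    return {g: hired[g] / total[g] for g in total}

model = train(history)

def predict(group, threshold=0.5):
    """A seemingly 'neutral' rule that simply echoes the past."""
    return model[group] >= threshold

print(model)          # {'A': 0.8, 'B': 0.3}
print(predict("A"))   # True  — the historically favored group passes
print(predict("B"))   # False — the historical disadvantage is repeated
```

The model never "decides" to discriminate; it merely restates the pattern in its training data, which is exactly the dynamic the paragraph above describes.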
In her article "Rise of the Alt-Right", Daniels explains how algorithms "speed up the spread of White supremacist ideology" by producing search results that reinforce cyber racism. Daniels posits that algorithms direct alt-right users to sites that echo their views. This allows users to connect and build communities on platforms that place little to no restrictions on speech, such as 4chan. Daniels points to the internet searches of Dylann Roof, a white supremacist, as an example of how algorithms perpetuate cyber racism. She claims that his internet search for "black on white crime" directed him to racist sites that reinforced and strengthened his racist views.
Moreover, Latanya Sweeney, a Harvard professor, has found that online advertisements generated by algorithms tend to display more advertisements for arrest records alongside African American-sounding names than Caucasian-sounding names. Similarly, Caya Carter's honors thesis lists glaringly racist examples in which searching specifically for "black girls" returned harmful results on the first page, such as "Black Booty on the Beach" and other hyper-sexualized responses. Carter also found that Google searches involving people of varying races produced very biased search suggestions, with negative connotations or stereotypes most associated with Black people, and even more so with Black women.[24] Nicol Turner Lee writes about a similar situation in which search results for "black-sounding names" returned arrest record information. Lee also mentions that, a few years later, a Google search for "gorillas" returned images of two Black people.[25]
Daniels writes in her 2009 book Cyber Racism that "white supremacy has entered the digital era", further confronting the idea of technology's "inherently democratizing" nature. Yet, according to Ruha Benjamin, researchers studying cyber racism have concentrated on "how the Internet perpetuates or mediates racial prejudice at the individual level rather than analyze how racism shapes infrastructure and design." Benjamin continues by stating the importance of investigating "how algorithms perpetuate or disrupt racism…in any study of discriminatory design."
In Australia, cyber-racism is unlawful under s 18C of the Racial Discrimination Act 1975 (Cth). As it involves a misuse of telecommunications equipment, it may also be criminal under s 474.17 of the Criminal Code Act 1995 (Cth).[26] State laws in each Australian state make racial vilification unlawful, and in most states serious racial vilification is a criminal offence. These laws also generally apply to cyber-racism; for example, s 7 "Racial vilification unlawful" and s 24 "Offence of serious racial vilification" of the Racial and Religious Tolerance Act 2001 (Vic) both explicitly state that the conduct referred to may include the use of the Internet.[27]
In May 2000, after the League Against Racism and Anti-Semitism (la Ligue Internationale Contre le Racisme et l'Antisémitisme, LICRA) and the Union of French Jewish Students (UEJF) brought an action against Yahoo! Inc., which hosted an auction website selling Nazi paraphernalia, a French judge ruled that Yahoo! should stop providing access to French users.[28]