Digital redlining is the practice of creating and perpetuating inequities between already marginalized groups specifically through the use of digital technologies, digital content, and the internet.[1] The concept of digital redlining is an extension of the practice of redlining in housing discrimination,[2] [3] a historical legal practice in the United States and Canada dating back to the 1930s where red lines were drawn on maps to indicate poor and primarily black neighborhoods that were deemed unsuitable for loans or further development, which created great economic disparities between neighborhoods.[4] [5] The term was popularized by Dr. Chris Gilliard, a privacy scholar, who defines digital redlining as "the creation and maintenance of tech practices, policies, pedagogies, and investment decisions that enforce class boundaries and discriminate against specific groups".[6] [7]
Though digital redlining is related to the digital divide and to techniques such as weblining and personalization, it is distinct from these concepts and forms part of larger, complex systemic issues.[8][9] It can refer to practices that create inequities of access to technology services in geographical areas, such as when internet service providers decide not to serve specific geographic areas because those areas are perceived as less profitable, thereby reducing access to crucial services and civic participation. It can also refer to inequities caused by the policies and practices of digital technologies. In such cases, inequities are produced through divisions created by algorithms that are hidden from the technology user; the use of big data and analytics allows for a much more nuanced form of discrimination that can target specific vulnerable populations.[10] These algorithmic methods are enabled by unregulated data technologies that assign individuals scores statistically categorizing their personality traits or tendencies; such scores resemble credit scores but are proprietary to the technology companies and subject to no outside oversight.[11]
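As a purely hypothetical sketch of how such opaque scoring can operate, the following Python example combines behavioral signals into a single proprietary score and sorts users into segments they never see; all field names, weights, and thresholds are invented for illustration and do not describe any real company's system.

```python
# Hypothetical sketch of an opaque, proprietary scoring scheme.
# All field names, weights, and thresholds are invented for illustration.

def propensity_score(profile: dict) -> float:
    """Combine behavioral signals into a single score, credit-score style,
    with weights chosen privately by the vendor and never disclosed."""
    weights = {
        "late_night_browsing": 0.4,    # inferred behavioral trait
        "discount_link_clicks": 0.35,  # inferred "impulse buyer" tendency
        "zip_code_risk_index": 0.25,   # proxy that can encode neighborhood bias
    }
    return sum(weights[k] * profile.get(k, 0.0) for k in weights)

def segment(profile: dict) -> str:
    """Users never see the score or the cutoff that sorts them into tiers."""
    score = propensity_score(profile)
    return "target_aggressively" if score > 0.6 else "deprioritize"

print(segment({"late_night_browsing": 0.9,
               "discount_link_clicks": 0.8,
               "zip_code_risk_index": 0.7}))  # -> "target_aggressively"
```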
While the roots of redlining lie in excluding populations based on geography, digital redlining occurs in both geographical and non-geographical contexts. An example of both contexts can be found in the charges brought against Facebook on March 28, 2019, by the United States Department of Housing and Urban Development (HUD). HUD charged Facebook with violating the Fair Housing Act of 1968 by "encouraging, enabling, and causing housing discrimination through the company's advertising platform."[12] HUD stated that Facebook allowed advertisers to "exclude people who live in a specified area from seeing an ad by drawing a red line around that area." The discriminatory practices cited by HUD included those that were racist, homophobic, ableist, and classist. Beyond this example of geographically based digital redlining, HUD also charged that Facebook used profile information and designations to exclude classes of people. The charges stated: "Facebook enabled advertisers to exclude people whom Facebook classified as parents; non-American-born; non-Christian; interested in accessibility; interested in Hispanic culture; or a wide variety of other interests that closely align with the Fair Housing Act's protected classes." Several media outlets pointed out HUD's own history of housing discrimination through redlining, the establishment of the Fair Housing Act to combat redlining, and how the digital platform was recreating this discriminatory practice.[13][14][15][16]
Although digital redlining refers to a complex and varied set of practices, it has been most commonly applied to practices with a geographical dimension. Common examples include internet service providers deciding not to serve specific geographic areas because those areas are seen as less profitable, resulting in discrimination against low-income communities and reduced access to crucial services and civic participation.[17][18] AT&T has faced specific scrutiny for this form of digital redlining: it has been reported that AT&T has been classist in its offerings of broadband internet service in more impoverished areas.[19]
Geographically based digital redlining can also apply to digital content or the distribution of goods sold online. Location-based games such as Pokémon Go have been shown to offer more virtual stops and rewards in geographic areas that are less ethnically and racially diverse.[20] In 2016, Amazon was rebuked for not offering its Prime same-day delivery service to many communities that were largely African American and had incomes beneath the national average.[21][22] Even services such as email can be affected, with many email administrators creating filters that flag particular messages as spam based on the geographical origin of the message.[23]
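A minimal sketch of the origin-based mail filtering described above might look like the following; the blocked-country list, toy geolocation table, and function names are assumptions made for illustration, not the configuration of any actual mail system.

```python
# Hypothetical sketch of a spam filter that flags mail by geographic origin.
# The blocked-country list and the geolocation lookup are illustrative only.

BLOCKED_ORIGINS = {"XX", "YY"}  # placeholder country codes an administrator might block wholesale

def lookup_country(sender_ip: str) -> str:
    """Stand-in for an IP-geolocation lookup (e.g., against a local GeoIP database)."""
    geoip_table = {"203.0.113.7": "XX", "198.51.100.2": "ZZ"}  # toy data
    return geoip_table.get(sender_ip, "UNKNOWN")

def is_flagged_as_spam(sender_ip: str) -> bool:
    """Flag a message purely because of where it originates,
    regardless of its content or the sender's legitimacy."""
    return lookup_country(sender_ip) in BLOCKED_ORIGINS

print(is_flagged_as_spam("203.0.113.7"))   # True: blocked solely by origin
print(is_flagged_as_spam("198.51.100.2"))  # False
```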
Although often associated with discrimination in a geographically based context, digital redlining also refers to cases in which vulnerable populations are targeted for, or excluded from, specific content or access to the internet in a way that harms them based on some aspect of their identity. Trade schools and community colleges, which typically have a more working-class student body, have been found to block public internet content from their students where elite research institutions do not.[24] The use of big data and analytics allows for a much more nuanced form of discrimination that can target specific vulnerable populations.[25] For example, Facebook has been criticized for providing tools that allow advertisers to target ads by ethnic affinity and gender, effectively blocking minorities from seeing specific ads for housing and employment.[26][27][28] In October 2019, a major class action lawsuit was filed against Facebook alleging gender and age discrimination in financial advertising.[29][30] A broad array of consumers can be particularly vulnerable to digital redlining when it is used outside of a geographical context. Beyond targeting vulnerable populations based on traditional and legally recognized classifications such as race, gender, and age, it has been shown that personal data mined and then resold by brokers can be used to target those identified as suffering from Alzheimer's disease or dementia, or simply identified as impulse buyers or gullible.[31][32]
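To make the exclusion mechanism concrete, the following hypothetical sketch drops users from an ad's audience based on inferred profile attributes; the attribute names, campaign fields, and function names are invented and do not reflect any platform's actual advertising API.

```python
# Hypothetical sketch of exclusion-based ad targeting.
# Attribute names and the campaign definition are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Campaign:
    name: str
    excluded_attributes: set = field(default_factory=set)

def eligible_audience(users: list[dict], campaign: Campaign) -> list[str]:
    """Return only users who carry none of the excluded inferred attributes,
    silently removing everyone else from the ad's reach."""
    return [
        u["id"] for u in users
        if not (set(u.get("inferred_attributes", [])) & campaign.excluded_attributes)
    ]

housing_ad = Campaign(
    name="downtown_apartments",
    excluded_attributes={"interested_in_accessibility", "parent"},
)

users = [
    {"id": "u1", "inferred_attributes": ["parent"]},
    {"id": "u2", "inferred_attributes": ["frequent_traveler"]},
]

print(eligible_audience(users, housing_ad))  # ['u2']: 'u1' never sees the housing ad
```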
Earlier distinctions have been made between weblining, the practice of charging customers different prices based on profile information, and internet or digital redlining, with digital redlining focused not on pricing but on access. As early as 2002, the Gale Encyclopedia of E-Commerce put forth the distinction more in use today: weblining is the pervasive and generally accepted (or at least tolerated) practice of personalizing access to products and services in ways invisible to the user; digital redlining is when such personalized, data-driven schemes perpetuate traditional advantages of privileged demographics.[33] As weblining has become more ubiquitous, the term has fallen out of use in favor of the more general term personalization.
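A toy sketch of weblining as defined above might quote different prices from profile signals invisible to the shopper; the markup rules and profile fields below are assumptions for illustration only and do not describe any real retailer's pricing logic.

```python
# Hypothetical sketch of weblining: quoting different prices from profile data.
# The markup rules and profile fields are invented for illustration.

BASE_PRICE = 100.00

def personalized_price(profile: dict) -> float:
    """Adjust the displayed price using signals invisible to the shopper."""
    price = BASE_PRICE
    if profile.get("estimated_income") == "high":
        price *= 1.15   # quote a higher price to users inferred to pay more
    if profile.get("device") == "premium_phone":
        price *= 1.05   # device model used as a wealth proxy
    if profile.get("zip_code_tier") == "low":
        price *= 0.95   # or steer cheaper, lower-quality offers instead
    return round(price, 2)

print(personalized_price({"estimated_income": "high", "device": "premium_phone"}))  # 120.75
print(personalized_price({"zip_code_tier": "low"}))                                 # 95.0
```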
Scholars have often drawn connections between the digital divide and digital redlining.[34] In practice, the digital divide is seen as one of a number of impacts of digital redlining, and digital redlining is one of a number of ways in which the divide is maintained or extended.[35]
A 2001 report sought to determine whether the gap in broadband internet access among low-income and minority populations was due to a lack of availability or to other factors. The report found "little evidence of digital redlining based on income or black or Hispanic concentrations," but mixed evidence of redlining in areas with larger Native American or Asian populations.