Algorithmic accountability explained

Algorithmic accountability refers to the question of who should be held accountable for the consequences of real-world actions taken on the basis of decisions reached by algorithms.[1]

In principle, an algorithm should be designed so that no bias affects the decisions made during its execution. That is, the algorithm should evaluate only the essential characteristics of the inputs presented, without making distinctions based on characteristics that normally should not play a role in a social environment, such as the ethnicity of an individual being judged in a court of law. In practice, however, this principle is not always respected, and on occasion individuals are harmed by the resulting outcomes. This is where the debate arises about who should be held responsible for the harm caused by a decision made by the machine: the system itself, or the individuals who designed it with those parameters, since a decision that harms other individuals through lack of impartiality or incorrect data analysis happens because the algorithm was designed to behave that way.[2]
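As a minimal illustration of this design principle, the hypothetical Python sketch below scores a loan applicant using only a pre-approved set of task-relevant features, so that a protected attribute such as ethnicity never enters the calculation (all field names and weights are invented for the example):

```python
# Hypothetical loan-scoring routine restricted to task-relevant features.
# Protected attributes present in the input are dropped before scoring.
RELEVANT_FEATURES = {"income", "debt", "payment_history_score"}

def score_applicant(applicant: dict) -> float:
    # Keep only pre-approved features; anything else in the record
    # (e.g. "ethnicity") is ignored by construction.
    features = {k: v for k, v in applicant.items() if k in RELEVANT_FEATURES}
    income = features.get("income", 0.0)
    debt = features.get("debt", 0.0)
    history = features.get("payment_history_score", 0.5)  # value in [0, 1]
    # Arbitrary illustrative weighting, not a real credit model.
    return 0.6 * history + 0.4 * min(income / (debt + 1.0), 1.0)

applicant = {"income": 50_000, "debt": 10_000,
             "payment_history_score": 0.9, "ethnicity": "not used"}
print(round(score_applicant(applicant), 3))  # 0.94
```

Note that simply dropping a protected field does not by itself guarantee impartiality, since other features can act as proxies for it; the sketch only makes the stated design intent explicit.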

Algorithm usage

Algorithms are now spread across the most diverse sectors of society that involve computational techniques in their control systems, of the most varied sizes and with the most varied applications, including, but not limited to, medical, transportation and payment services.[3] In these sectors, the algorithms embedded in the applications perform a range of decision-making tasks.[4]

The way these algorithms are implemented, however, can be quite opaque. In effect, algorithms generally behave like black boxes: in most cases it is not known what process an input goes through during the execution of a particular routine, only the output that results from what was initially entered.[5] There is usually no knowledge of the parameters that make up the algorithm, nor of how biased toward certain aspects they may be, which can raise suspicion about the way an algorithm treats a given set of inputs. Such suspicion depends on the outputs generated after execution, and on whether some individual feels harmed by the result presented, especially when another individual, under similar conditions, ends up getting a different answer. Nicholas Diakopoulos has pointed to this opacity as the motivation for algorithmic accountability reporting that examines how such systems exert power over those affected by them.[5]
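To make the black-box situation concrete, here is a minimal, entirely hypothetical Python sketch: an outside observer can only probe the system with inputs and compare outputs, while the weighting applied inside remains hidden (the names and numbers are invented):

```python
# Hypothetical opaque decision system: users see inputs and outputs only.
def opaque_decision(applicant: dict) -> str:
    # Hidden, possibly biased weighting that affected individuals never see.
    hidden_weight = 0.8 if applicant.get("neighborhood") == "A" else 0.3
    return "approved" if applicant["score"] * hidden_weight > 0.5 else "denied"

# Black-box probing: two near-identical inputs, different outcomes,
# with no visibility into why.
print(opaque_decision({"score": 0.9, "neighborhood": "A"}))  # approved
print(opaque_decision({"score": 0.9, "neighborhood": "B"}))  # denied
```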

Wisconsin Supreme Court case

As mentioned before, algorithms are widespread across the most diverse fields of knowledge and make decisions that affect the lives of the entire population, while their structure and parameters are often unknown to those affected by them. A case that illustrates this well was a ruling by the Wisconsin Supreme Court regarding so-called "risk assessment" scores for crime. The court ruled that such a score, computed by an algorithm that takes various parameters describing an individual, cannot be used as the determining factor in deciding whether an accused person is arrested. In addition, and more importantly, the court ruled that all reports submitted to judges in such cases must contain information about the accuracy of the algorithm used to calculate the scores.
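Purely as an illustration of what attaching such accuracy information to a report might look like, the sketch below bundles a risk score with validation and limitation fields; the field names and figures are hypothetical and are not drawn from the system at issue in the case:

```python
from dataclasses import dataclass

@dataclass
class RiskReport:
    case_id: str
    risk_score: float           # e.g. 1 (low risk) to 10 (high risk)
    validation_accuracy: float  # accuracy measured on held-out historical cases
    known_limitations: str      # caveats that must accompany the score

report = RiskReport(
    case_id="case-0001",
    risk_score=7.0,
    validation_accuracy=0.68,   # invented figure, for illustration only
    known_limitations="Not validated on the local population; "
                      "the score must not be the determining factor.",
)
print(report)
```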

This ruling has been considered a major victory for how a data-driven society should deal with software that makes decisions, and for how to make such software reliable, since the use of these algorithms in highly sensitive settings such as courts requires a very high degree of impartiality in handling the data provided as input. However, advocates of big data argue that much remains to be done regarding the accuracy of algorithmic results, since there is still nothing concrete about how to understand what happens during data processing, leaving room for doubt about the suitability of the algorithms or of those who design them.

Controversies

Another case involving the possibility of biased execution by an algorithm was the subject of an article in The Washington Post[6] discussing the passenger transportation service Uber. After analyzing the data collected, it was possible to verify that the estimated waiting time for users of the service varied depending on the neighborhood in which they lived. The main factors associated with longer waiting times were the neighborhood's majority ethnicity and average income.

In the above case, neighborhoods with a majority white population and higher purchasing power had shorter waiting times, while neighborhoods with populations of other ethnicities and lower average income had longer waiting times. It is important to make clear, however, that this conclusion was based on the data collected and does not necessarily represent a cause-and-effect relationship, but possibly a correlation, and no value judgment is made about the behavior of the Uber app in these situations.
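A minimal sketch of this kind of analysis, using invented toy data rather than The Washington Post's dataset, shows how a correlation between neighborhood income and estimated waiting time can be measured without implying any causal mechanism (requires Python 3.10+ for statistics.correlation):

```python
import statistics

# Invented per-neighborhood data: (median income in $1000s, mean wait in minutes).
neighborhoods = [(95, 3.1), (88, 3.4), (72, 4.0), (60, 4.8), (45, 6.2), (38, 6.9)]

incomes = [income for income, _ in neighborhoods]
waits = [wait for _, wait in neighborhoods]

# A negative Pearson correlation means richer areas tend to wait less in this
# sample; it says nothing about why (driver supply, demand patterns, ...).
r = statistics.correlation(incomes, waits)
print(f"correlation(income, wait) = {r:.2f}")
```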

In an article published in the column "Direito Digit@l" on the Migalhas website,[7] Coriolano Almeida Camargo and Marcelo Crespo discuss the use of algorithms in decision-making contexts previously occupied by human beings, and the difficulty of verifying whether a decision made by the machine was fair or not:

"The great evolution of technology that we are experiencing has brought a wide range of innovations to society, among them the introduction of the concept of autonomous vehicles controlled by systems, that is, by algorithms embedded in these devices that control the entire process of navigation on streets and roads. These systems face situations where they need to collect data, evaluate the environment and the context in which they are inserted, and decide what actions should be taken at each moment, simulating the actions of a human driver behind the wheel."

In the same article, commenting on the excerpt above, Camargo and Crespo discuss the problems that can arise from the use of embedded algorithms in autonomous cars, especially with regard to decisions made at critical moments while the vehicles are in use.
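As a rough idea of the kind of sense-evaluate-act loop such a vehicle runs, here is a deliberately simplified, hypothetical sketch; real autonomous-driving stacks are vastly more complex, and nothing here reflects any actual product:

```python
# Hypothetical decision step: read sensors, evaluate the situation, pick an action.
def choose_action(sensor_reading: dict) -> str:
    distance = sensor_reading["obstacle_distance_m"]
    speed = sensor_reading["speed_kmh"]
    # Very rough stopping-distance heuristic, for illustration only.
    if distance < speed * 0.3:
        return "emergency_brake"
    if distance < speed * 0.6:
        return "slow_down"
    return "continue"

print(choose_action({"obstacle_distance_m": 12.0, "speed_kmh": 50.0}))  # emergency_brake
```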

Writing on the TechCrunch website, Hemant Taneja has likewise argued for the need for algorithmic accountability in such systems.[8]

Possible solutions

Experts have already held discussions on the subject in an attempt to reach viable solutions for understanding what goes on inside the black boxes that "guard" these algorithms. It is argued, first of all, that the companies that develop and run the data analysis algorithms should themselves be responsible for ensuring the reliability of their systems, for example by disclosing what goes on "behind the scenes" in their algorithms.
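As a minimal sketch of what disclosing "what goes on behind the scenes" could look like in practice, the hypothetical example below returns, alongside each decision, the weights used and each input's contribution to the score (the feature names and weights are invented):

```python
# Hypothetical transparent scorer: every decision is returned together with
# the disclosed weights and the per-feature contributions behind it.
WEIGHTS = {"income": 0.4, "payment_history": 0.5, "existing_debt": -0.3}
THRESHOLD = 0.5

def transparent_decision(features: dict) -> dict:
    contributions = {name: WEIGHTS[name] * features.get(name, 0.0)
                     for name in WEIGHTS}
    total = sum(contributions.values())
    return {
        "decision": "approved" if total > THRESHOLD else "denied",
        "score": round(total, 3),
        "contributions": {k: round(v, 3) for k, v in contributions.items()},
        "weights_disclosed": WEIGHTS,
    }

print(transparent_decision({"income": 0.7, "payment_history": 0.9,
                            "existing_debt": 0.2}))
```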

One possible path is the introduction of regulation for the sectors that run these algorithms, so that there is effective supervision of the activities happening during their execution. However, introducing such regulation could end up burdening software companies and developers, and it would possibly be more advantageous for them to willingly open up and disclose what is being executed and which parameters are used for decision-making, which could even end up benefiting the companies themselves with regard to trust in the solutions they develop and apply.

Another possibility discussed is self-regulation by the developer companies themselves, carried out through the software itself, a direction Taneja also touches on in the same TechCrunch article.[8]
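One way such self-regulation could be built into the software itself is a routine that audits its own recent decisions for disparities between groups; the sketch below is hypothetical and uses the "80% rule" purely as an illustrative convention, not as a legal test:

```python
from collections import defaultdict

def disparity_audit(decisions: list[tuple[str, str]], threshold: float = 0.8) -> dict:
    """Flag groups whose approval rate falls below `threshold` times the
    best-performing group's rate (illustrative self-audit only)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, outcome in decisions:
        counts[group][1] += 1
        if outcome == "approved":
            counts[group][0] += 1
    rates = {g: approved / total for g, (approved, total) in counts.items()}
    best = max(rates.values())
    return {g: ("ok" if r >= threshold * best else "flagged") for g, r in rates.items()}

decision_log = [("A", "approved"), ("A", "approved"), ("A", "denied"),
                ("B", "approved"), ("B", "denied"), ("B", "denied")]
print(disparity_audit(decision_log))  # {'A': 'ok', 'B': 'flagged'}
```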

References

  1. Shah, H. (2018). "Algorithmic accountability". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 376 (2128): 20170362. doi:10.1098/rsta.2017.0362.
  2. Kobie, Nicole. "Who do you blame when an algorithm gets you fired?". Wired. Retrieved March 2, 2023.
  3. Angwin, Julia (August 2016). "Make Algorithms Accountable". The New York Times. Retrieved March 2, 2023.
  4. Kroll; Huey; Barocas; Felten; Reidenberg; Robinson; Yu (2016). Accountable Algorithms. University of Pennsylvania. SSRN 2765268.
  5. Diakopoulos, Nick. "Algorithmic Accountability & Transparency". Archived from the original on January 21, 2016. Retrieved March 3, 2023.
  6. Stark, Jennifer; Diakopoulos, Nicholas (March 10, 2016). "Uber seems to offer better service in areas with more white people. That raises some tough questions." The Washington Post. Retrieved March 2, 2023.
  7. Santos, Coriolano Aurélio de Almeida Camargo; Chevtchuk, Leila (October 28, 2016). "Por quê precisamos de uma agenda para discutir algoritmos?" [Why do we need an agenda to discuss algorithms?]. Migalhas (in Portuguese). Retrieved March 4, 2023.
  8. Taneja, Hemant (September 8, 2016). "The need for algorithmic accountability". TechCrunch. Retrieved March 4, 2023.
