Existential risk studies explained

Existential risk studies (ERS) is a field of study focused on the definition and theorization of "existential risks", their ethical implications, and related strategies for long-term survival.[1] Existential risks are variously defined as global calamities with the potential to cause the extinction of intelligent life on Earth, such as humans, or at least to severely curtail its capacity. The field's development and expansion can be divided into waves according to its conceptual changes as well as its evolving relationship with related fields and theories, such as futures studies, disaster studies, AI safety, effective altruism and longtermism.

The historical precursors of existential risk studies can be found in early 19th-century thought on human extinction and in the more recent models and theories of global catastrophic risk, which date mainly to the Cold War period, especially thinking about a hypothetical nuclear holocaust. ERS emerged as a distinctive and unified field in the early 2000s, growing rapidly within academia and among the general public through the publication of popularly oriented books. The field has also fostered the creation of a number of foundations, research centers and think tanks, some of which have received substantial philanthropic funding and gained prominence within prestigious universities.

Background

See main article: Human extinction and Global catastrophic risk. The idea of existential risk has its prehistory in speculation on the possibility of human extinction. The prospect of extinction is itself a break from earlier religious and mythological eschatology insofar as it is conceived as an absolute and naturalistic event. As such, human extinction is a recent invention in the intellectual history of calamity.

Another precursory trend for existential risks can be identified in the discourses of scientific concern over catastrophe that emerged primarily in reaction to the invention of nuclear weapons. These early responses attended especially to the possibility of atmospheric ignition, which was soon dismissed as implausible, and to radioactive contamination, which became a substantial and persistent theme in discussions of possible catastrophic events. The risk posed by radioactive fallout prompted a quick mobilization among scientists and intellectuals, notably exemplified by the Russell–Einstein Manifesto of 1955, which warned of the possibility of human extinction. As a consequence, the Pugwash Conferences on Science and World Affairs were established with the purpose of reducing the threat of armed conflict. A similar effort is exemplified by the creation of the Bulletin of the Atomic Scientists, which gathered former members of the Manhattan Project. The Bulletin also created and has maintained the iconic Doomsday Clock, which tracks global catastrophic risk and represents it in temporal form.

Human extinction in fiction

Notably, Lord Byron was, according to reports, concerned that a comet impact could destroy humanity, and his poem "Darkness" describes a future in which the Earth becomes lifeless. Mary Shelley's novel The Last Man provides another example of early naturalistic catastrophic imagination, telling the story of a man who lives through the death of the rest of humanity in the final decades of the 21st century, brought about by events including a worldwide plague. The idea of the "last man" can itself be traced to an emerging genre of 19th-century literature, originating most probably with Jean-Baptiste Cousin de Grainville's work, also titled The Last Man and published in 1805, in which humanity endures a crisis of infertility. A later rendition of this theme appears in The Time Machine, published by H. G. Wells in 1895, in which a time traveller finds himself 30 million years in the future, on an Earth that has become a cold and almost lifeless planet owing to the cooling of the sun. Around the same period, Wells wrote two other texts on extinction, this time as nonfiction essays: "On Extinction" (1893) and "The Extinction of Man" (1897). In the 20th century, human extinction persisted as a theme in science fiction. Isaac Asimov not only addressed the possibility of civilizational collapse in his Foundation trilogy, but also wrote a nonfiction book on the subject, A Choice of Catastrophes: The Disasters That Threaten Our World, published in 1979.

History

First wave

The foundational moment of ERS can be dated to the publication of Nick Bostrom's 2002 essay "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". In this essay, Bostrom sought to frame human extinction as a topic of philosophical pertinence to the analytic and utilitarian traditions, mainly by dissociating it from past apocalyptic literature and by presenting a schematized and holistic review of possible threats to human survival or, more generally, to humanity's capacity to realize its own potential, as he defined it, which stands as the canonical definition of existential risk.[1] He also attempted to align the study of existential risks with the prospect of overcoming them through colossal technological development, which would allow long-term human survival through outer-space colonization. Most of the essay consists of a proposed classification of existential risks into four categories, idiomatically named "Bangs", "Crunches", "Shrieks" and "Whimpers", all inspired by T. S. Eliot's poem "The Hollow Men". The categories are organized in descending order of probability, starting with the outcome that Bostrom considers most probable.

The essay brought Bostrom significant academic recognition, contributing to his professorship at Oxford University and, in 2005, the directorship of the now-defunct Future of Humanity Institute, which he helped to found. The Centre for the Study of Existential Risk was established at Cambridge University in 2012, prompting the creation of similar centers at other universities.

This initial rendition of existential risks established what has been termed the "first wave" of ERS, described as an instance of technological utopianism defined by its expectation, or what Noah B. Taylor characterizes as a "teleological momentum", of a posthuman vision of the future.

Second wave

The second wave, or generation, of ERS was characterized by its effort to elaborate on Bostrom's foundational work, and was further distinguished by its growing interaction with effective altruism. The emphasis on transhumanism is considered to have diminished during this period.

Third wave

After its relative institutional consolidation and the expansion of the scholarship engaged with the field, ERS became increasingly occupied with issues relating to the diversity of its constituency and the need for theoretical pluralism in its research. Some ERS scholars focused on critical examinations of the "historically dominant" approach within the field, termed by some the "techno-utopian approach". This technological utopianism has formed the theoretical core of ERS, drawing substantial inspiration from transhumanism, longtermism and the current of utilitarianism known as total utilitarianism. The scholars most critical of this background claim that it suffers from intrinsic moral unreliability and methodological flaws, evidencing the need for new frameworks in ERS, especially ones that strengthen democratic values perceived as lacking in the original formulation.

Related fields

Effective altruism

See main article: Effective altruism. Existential risk studies developed a substantial relationship with the effective altruism (EA) philanthropic philosophy and community, embracing many of its core ideas and attracting a number of effective altruists into the field. Perhaps most significantly, the EA community has contributed a substantial amount of financial resources to ERS, fueling the expansion of its academic and popular reputation.

Debate

Critique of technological utopianism

Some scholars within ERS have asserted the need for a more attentive examination of the field's original theoretical core and for an opening toward theoretical pluralism that seeks to rectify the perceived methodological and moral flaws of this historically dominant approach. This original theoretical base of ERS has been termed by some the "techno-utopian approach", in reference to the general idea of technological utopianism, and is defined by its strong bonds with transhumanism, longtermism and so-called total utilitarianism. The premises of the techno-utopian approach are manifested in three assumptions, not all of which are explicitly or fully shared by every adherent: that a "(...) maximally technologically developed future could contain (and is defined in terms of) enormous quantities of utilitarian intrinsic value, particularly due to more fulfilling posthuman modes of living"; that the failure of this future would represent an "existential catastrophe"; and, lastly, that the present moral obligation is to ensure the realization of this posthuman future, "(...) including through exceptional actions." These assumptions are considered particularly essential to the canonical definition of existential risk.

The technological utopianism paradigm of ERS is considered most visibly and influentially articulated in Nick Bostrom's foundational work, both the aforementioned 2002 and 2013 essays and his 2003 paper "Astronomical Waste". Popular books by existential risk thinkers, such as The Precipice, Superintelligence, and What We Owe the Future, have also raised the public profile of technological utopianism.

Claims of neglected research

Theorists of ERS, Bostrom prominently among them, have often claimed that "existential risk" is an understudied subject in the academic literature. In a 2013 essay, "Existential Risk Prevention as Global Priority", Bostrom remarked that the Scopus database contained 900 papers on dung beetles but fewer than 50 papers matching a search for "human extinction", which, in Bostrom's view, confirms the neglected state of research on the subject.

However, other researchers have contested and criticized both the premises and conclusions of this claim and the particular experiment Bostrom used to substantiate it. Joshua Schuster and Derek Woods noted that the same search, repeated in March 2020, returned a marginally improved number of papers on human extinction; yet a search for a commonly related term, "genocide", returned 7,166 papers. In a different database, JSTOR, the researchers found 66,809 results for "human extinction", 43,926 for "genocide" and 134,089 for "extinction". Moreover, searches for specific instances of existential risk, such as nuclear war or genetically engineered bioweapons, yield an enormous accumulation of research. The authors claim that this disparity is symptomatic of Bostrom's attachment to self-defined criteria and terms, which, according to them, remains inattentive to research on human rights and genocide prevention.

Criticism of concepts

Psychologist Steven Pinker has called existential risk a "useless category" that can distract from real threats such as climate change and nuclear war.[2]

Alleged ignorance of genocides

Some scholars consider the concept of existential risk established within ERS to be excessively restrictive and narrow, disclosing an attitude of neglect toward the history of genocides, especially the colonial genocide of indigenous peoples. Nick Bostrom, for example, explicitly takes as the starting point for anthropogenic existential risks the period after the end of World War II, with the invention of nuclear weapons.

Notes and References

  1. Piovesan, Giorgia (June 7, 2023). "Guarding Humanity: Mapping the Landscape of X-Risks". Security Distillery.
  2. https://www.science.org/content/article/could-science-destroy-world-these-scholars-want-save-us-modern-day-frankenstein