Data rescue is a movement among scientists, researchers and others to preserve primarily government-hosted data sets, often scientific in nature, to ward off their removal from publicly available websites. While the concept of preserving federal data existed before, it gained new impetus with the election in 2016 of U.S. President Donald Trump.
The concept of harvesting and preserving federal web pages began as early as 2008, at the conclusion of President George W. Bush's second term, under the name "End of Term Presidential Harvest."[1]
Soon after Trump's election, scientists, librarians, and others in the U.S. and Canada began working to preserve federal scientific data, fearing that the administration of Trump, who had denied the validity of the scientific consensus on climate change,[2] would act to remove such data from government websites.[3]
The concept of data rescue quickly became a grassroots movement, with organized "hackathon" events in cities across the U.S. and elsewhere, often hosted at universities and other institutions of higher education.
The Guerrilla Archiving Event: Saving Environmental Data from Trump was a meeting arranged by two professors at the University of Toronto in December 2016,[4][5] in an effort to pre-emptively preserve US government climate data from possible deletion by the Trump Administration.[6][7]
During his presidential campaign, Trump expressed climate change denialism on various occasions,[8] calling climate change a "Chinese hoax" created "in order to make U.S. manufacturing non-competitive", and voiced hostility towards climate science and the Paris climate agreement.[9][10] In early December 2016, Scott Pruitt, a prominent climate change denier, was selected as the new administrator of the United States Environmental Protection Agency (EPA).[11] These developments raised concerns in the academic community that the scientific opinion on climate change might be suppressed during Trump's presidency. Indeed, according to Reuters sources, on 25 January 2017 the Trump administration instructed the EPA to remove the climate change page from its website.[12]
Fearing possible deletion or alteration of US government websites containing climate data, as had happened in Canada, people from various backgrounds and training, including coders, environmental scientists, social scientists, archivists, and librarians, gathered to save US government websites at risk of changing or disappearing during or after the government transition.[13][14] The event collaborated with the Internet Archive's End of Term project.[15][16]
Climate Mirror is "an open project to mirror public climate datasets", that is, an open-access project to mirror (back up) publicly owned climate science datasets, such as data from U.S. federally funded research. Datasets from the National Oceanic and Atmospheric Administration (NOAA), the Environmental Protection Agency (EPA), and NASA are considered primary examples.
The idea behind Climate Mirror follows the archival principle that "lots of copies keep stuff safe": maintaining many independent copies protects data from disappearing through censorship, link rot, lapses of professionalism in preserving the integrity of the scientific record, or lack of digital permanence. It offers a parallel form of massive backup.
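As a rough illustration of what mirroring a public dataset can involve (this is a minimal sketch, not Climate Mirror's actual tooling; the URL, directory, and function names below are hypothetical placeholders), one might download a file and record a checksum so that independent copies can later be verified against one another:

```python
import hashlib
from pathlib import Path
from urllib.request import urlopen

# Hypothetical placeholder URL; real mirroring projects work from curated dataset lists.
DATASET_URL = "https://example.gov/climate/temperature-anomalies.csv"
MIRROR_DIR = Path("mirror")

def mirror_dataset(url: str, dest_dir: Path) -> str:
    """Download one public dataset and return its SHA-256 checksum.

    Storing checksums alongside the copies lets independent mirrors
    confirm that their data matches: "lots of copies keep stuff safe"
    only works if the copies can be compared.
    """
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / url.rsplit("/", 1)[-1]
    with urlopen(url) as response:
        data = response.read()
    dest.write_bytes(data)
    checksum = hashlib.sha256(data).hexdigest()
    dest.with_suffix(dest.suffix + ".sha256").write_text(checksum + "\n")
    return checksum

if __name__ == "__main__":
    print(mirror_dataset(DATASET_URL, MIRROR_DIR))
```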