Future of Life Institute

Future of Life Institute
Abbreviation: FLI
Type: Non-profit research institute
Purpose: Reduction of existential risk, particularly from advanced artificial intelligence
President: Max Tegmark
Coordinates: 42.3736°N, 71.1097°W

The Future of Life Institute (FLI) is a nonprofit organization that aims to steer transformative technology towards benefiting life and away from large-scale risks, with a particular focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, the United States government, and European Union institutions.

The founders of the Institute include MIT cosmologist Max Tegmark, UCSC cosmologist Anthony Aguirre, and Skype co-founder Jaan Tallinn; among the Institute's advisors is entrepreneur Elon Musk.

Purpose

FLI's stated mission is to steer transformative technology towards benefiting life and away from large-scale risks.[1] FLI's philosophy focuses on the potential risk to humanity from the development of human-level or superintelligent artificial general intelligence (AGI), but also works to mitigate risk from biotechnology, nuclear weapons and global warming.[2]

History

FLI was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, DeepMind research scientist Viktoriya Krakovna, Tufts University postdoctoral scholar Meia Chita-Tegmark, and UCSC physicist Anthony Aguirre. The Institute's advisors include computer scientists Stuart J. Russell and Francesca Rossi, biologist George Church, cosmologist Saul Perlmutter, astrophysicist Sandra Faber, theoretical physicist Frank Wilczek, entrepreneur Elon Musk, and actors and science communicators Alan Alda and Morgan Freeman (as well as cosmologist Stephen Hawking prior to his death in 2018).[3] [4] [5]

Since 2017, FLI has presented an annual "Future of Life Award", with the first awardee being Vasili Arkhipov. The same year, FLI released Slaughterbots, a short arms-control advocacy film; a sequel followed in 2021.[6]

In 2018, FLI drafted a letter calling for "laws against lethal autonomous weapons". Signatories included Elon Musk, Demis Hassabis, Shane Legg, and Mustafa Suleyman.[7]

In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper.[8] [9] In response, Tegmark said that the institute had only become aware of Nya Dagbladet's positions during due diligence processes a few months after the grant was initially offered, and that the grant had been immediately revoked.

Open letter on an AI pause

titled "". This called on major AI developers to agree on a verifiable six-month pause of any systems "more powerful than GPT-4" and to use that time to institute a framework for ensuring safety; or, failing that, for governments to step in with a moratorium. The letter said: "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one - not even their creators - can understand, predict, or reliably control".[10] The letter referred to the possibility of "a profound change in the history of life on Earth" as well as potential risks of AI-generated propaganda, loss of jobs, human obsolescence, and society-wide loss of control.[11] [12]

Prominent signatories of the letter included Elon Musk, Steve Wozniak, Evan Sharp, Chris Larsen, and Gary Marcus; AI lab CEOs Connor Leahy and Emad Mostaque; politician Andrew Yang; deep-learning researcher Yoshua Bengio; and Yuval Noah Harari.[13] Marcus stated "the letter isn't perfect, but the spirit is right." Mostaque stated, "I don't think a six month pause is the best idea or agree with everything but there are some interesting things in that letter." In contrast, Bengio explicitly endorsed the six-month pause in a press conference.[14] [15] Musk predicted that "Leading AGI developers will not heed this warning, but at least it was said."[16] Some signatories, including Musk, said they were motivated by fears of existential risk from artificial general intelligence.[17] Some of the other signatories, such as Marcus, instead said they signed out of concern about risks such as AI-generated propaganda.[18]

The authors of one of the papers cited in FLI's letter, "On the Dangers of Stochastic Parrots: Can Language Models be Too Big?",[19] including Emily M. Bender, Timnit Gebru, and Margaret Mitchell, criticised the letter.[20] Mitchell said that "by treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI. Ignoring active harms right now is a privilege that some of us don't have."

Operations

Advocacy

FLI has actively contributed to policymaking on AI. In October 2023, for example, U.S. Senate majority leader Chuck Schumer invited FLI to share its perspective on AI regulation with selected senators.[21] In Europe, FLI successfully advocated for the inclusion of more general AI systems, such as GPT-4, in the EU's Artificial Intelligence Act.[22] In military policy, FLI coordinated the support of the scientific community for the Treaty on the Prohibition of Nuclear Weapons. At the UN and elsewhere, the Institute has also advocated for a treaty on autonomous weapons.[23] [24]

Research grants

The FLI research program started in 2015 with an initial donation of $10 million from Elon Musk.[25] [26] [27] In this initial round, a total of $7 million was awarded to 37 research projects.[28] In July 2021, FLI announced that it would launch a new $25 million grant program with funding from the Russian–Canadian programmer Vitalik Buterin.[29]

Conferences

In 2014, the Future of Life Institute held its opening event at MIT: a panel discussion on "The Future of Technology: Benefits and Risks", moderated by Alan Alda.[30] [31] The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek, and Skype co-founder Jaan Tallinn.[32] [33]

Since 2015, FLI has organised conferences with the stated purpose of bringing together AI researchers from academia and industry. These have included an AI safety conference in Puerto Rico in 2015,[34] [35] the Beneficial AI conference at Asilomar in 2017, which produced the Asilomar AI Principles,[36] [37] [38] [39] and the Beneficial AGI conference in 2019.[40] [41]

Notes and References

  1. "Future of Life Institute homepage". Future of Life Institute. Archived 8 September 2021: https://web.archive.org/web/20210908143821/https://futureoflife.org/
  2. Chen, Angela. "Is Artificial Intelligence a Threat?". The Chronicle of Higher Education, 11 September 2014. Archived 22 December 2016: https://web.archive.org/web/20161222004051/http://www.chronicle.com/article/Is-Artificial-Intelligence-a/148763
  3. "But What Would the End of Humanity Mean for Me?". The Atlantic, 9 May 2014. Archived 4 June 2014: https://web.archive.org/web/20140604211145/http://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/
  4. "Who we are". Future of Life Institute. Archived 6 April 2020: https://web.archive.org/web/20200406150924/https://futureoflife.org/team/
  5. "Our science-fiction apocalypse: Meet the scientists trying to predict the end of the world". Salon, 5 October 2014. Archived 18 March 2021: https://web.archive.org/web/20210318143459/https://www.salon.com/2014/10/05/our_science_fiction_apocalypse_meet_the_scientists_trying_to_predict_the_end_of_the_world/
  6. Walsh, Bryan. "The physicist Max Tegmark works to ensure that life has a future". Vox, 20 October 2022. Archived 31 March 2023: https://web.archive.org/web/20230331053401/https://www.vox.com/future-perfect/23380941/future-perfect-50-max-tegmark-future-of-life-institute-physicist
  7. "AI Innovators Take Pledge Against Autonomous Killer Weapons". NPR, 2018. Archived 31 March 2023: https://web.archive.org/web/20230331053358/https://www.npr.org/2018/07/18/630146884/ai-innovators-take-pledge-against-autonomous-killer-weapons
  8. Dalsbro, Anders; Leman, Jonathan. "Elon Musk-funded nonprofit run by MIT professor offered to finance Swedish pro-nazi group". Expo, 13 January 2023. Archived 25 June 2023: https://web.archive.org/web/20230625040739/https://expo.se/2023/01/elon-musk-funded-nonprofit-run-mit-professor-offered-finance-swedish-pro-nazi-group
  9. Hume, Tim. "Elon Musk-Backed Non-Profit Offered $100K Grant to 'Pro-Nazi' Media Outlet". Vice, 19 January 2023. Archived 23 June 2023: https://web.archive.org/web/20230623160433/https://www.vice.com/en/article/93a475/future-of-life-institute-max-tegmark-elon-musk
  10. "Elon Musk among experts urging a halt to AI training". BBC News, 29 March 2023. Archived 1 April 2023: https://web.archive.org/web/20230401115027/https://www.bbc.com/news/technology-65110030
  11. "Elon Musk and other tech leaders call for pause in 'out of control' AI race". CNN, 29 March 2023. Archived 10 April 2023: https://web.archive.org/web/20230410150317/https://www.cnn.com/2023/03/29/tech/ai-letter-elon-musk-tech-leaders/index.html
  12. "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Archived 27 March 2023: https://web.archive.org/web/20230327111111/https://futureoflife.org/open-letter/pause-giant-ai-experiments/
  13. Ball, James. "We're in an AI race, banning it would be foolish". The Times, 2 April 2023. Archived 19 August 2023: https://web.archive.org/web/20230819123650/https://www.thetimes.co.uk/article/were-in-an-ai-race-banning-it-would-be-foolish-kl3qrrn6x
  14. "Musk and Wozniak among 1,100+ signing open letter calling for 6-month ban on creating powerful A.I.". Fortune, March 2023. Archived 29 March 2023: https://web.archive.org/web/20230329233553/https://fortune.com/2023/03/29/elon-musk-apple-steve-wozniak-over-1100-sign-open-letter-6-month-ban-creating-powerful-ai/
  15. "The Open Letter to Stop 'Dangerous' AI Race Is a Huge Mess". Vice, March 2023. Archived 30 March 2023: https://web.archive.org/web/20230330004040/https://www.vice.com/en/article/qjvppm/the-open-letter-to-stop-dangerous-ai-race-is-a-huge-mess
  16. Elon Musk. Twitter, 30 March 2023. Archived 30 March 2023: https://web.archive.org/web/20230330045430/https://mobile.twitter.com/elonmusk/status/1641132241035329536
  17. Rosenberg, Scott. "Open letter sparks debate over 'pausing' AI research over risks". Axios, 30 March 2023. Archived 31 March 2023: https://web.archive.org/web/20230331053920/https://www.axios.com/2023/03/30/chatgpt-ai-pause-debate-existential-risk
  18. "Tech leaders urge a pause in the 'out-of-control' artificial intelligence race". NPR, 2023. Archived 29 March 2023: https://web.archive.org/web/20230329223523/https://www.npr.org/2023/03/29/1166896809/tech-leaders-urge-a-pause-in-the-out-of-control-artificial-intelligence-race
  19. Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret. "On the Dangers of Stochastic Parrots: Can Language Models be Too Big?". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, ACM, 2021, pp. 610–623. doi:10.1145/3442188.3445922.
  20. Kari, Paul. "Letter signed by Elon Musk demanding AI research pause sparks controversy". The Guardian, 1 April 2023. Archived 1 April 2023: https://web.archive.org/web/20230401063716/https://www.theguardian.com/technology/2023/mar/31/ai-research-pause-elon-musk-chatgpt
  21. Krishan, Nihal. "Sen. Chuck Schumer's second AI Insight Forum covers increased R&D funding, immigration challenges and safeguards". FedScoop, 26 October 2023.
  22. "EU artificial intelligence act not 'futureproof', experts warn MEPs". ScienceBusiness.
  23. "Educating about Lethal Autonomous Weapons". Future of Life Institute.
  24. Government of Costa Rica. "FLI address". Latin American and the Caribbean conference on the social and humanitarian impact of autonomous weapons, 24 February 2023.
  25. "Elon Musk donates $10M to keep AI beneficial". Future of Life Institute, 15 January 2015. Archived 28 February 2018: https://web.archive.org/web/20180228042613/https://futureoflife.org/2015/10/12/elon-musk-donates-10m-to-keep-ai-beneficial/
  26. "Elon Musk donates $10M to Artificial Intelligence research". SlashGear, 15 January 2015. Archived 7 April 2015: https://web.archive.org/web/20150407185511/http://www.slashgear.com/elon-musk-donates-10m-to-artificial-intelligence-research-15364795/
  27. "Elon Musk is Donating $10M of his own Money to Artificial Intelligence Research". Fast Company, 15 January 2015. Archived 30 October 2015: https://web.archive.org/web/20151030202356/http://www.fastcompany.com/3041007/fast-feed/elon-musk-is-donating-10m-of-his-own-money-to-artificial-intelligence-research
  28. "New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial". Future of Life Institute, 28 October 2015. Archived 28 July 2019: https://web.archive.org/web/20190728221400/https://futureoflife.org/2015selection/
  29. "FLI announces $25M grants program for existential risk reduction". Future of Life Institute, 2 July 2021. Archived 9 September 2021: https://web.archive.org/web/20210909151941/https://futureoflife.org/2021/07/02/fli-june-2021-newsletter/
  30. "The Future of Technology: Benefits and Risks". Future of Life Institute, 24 May 2014. Archived 28 July 2019: https://web.archive.org/web/20190728114618/https://futureoflife.org/2014/05/24/the-future-of-technology-benefits-and-risks/
  31. "Machine Intelligence Research Institute - June 2014 Newsletter". 2 June 2014. Archived 3 July 2014: https://web.archive.org/web/20140703084814/http://intelligence.org/2014/06/01/miris-june-2014-newsletter/
  32. "FHI News: 'Future of Life Institute hosts opening event at MIT'". Future of Humanity Institute, 20 May 2014. Archived 27 July 2014: https://web.archive.org/web/20140727060336/http://www.fhi.ox.ac.uk/fli-mit/
  33. "The Future of Technology: Benefits and Risks". Personal Genetics Education Project, 9 May 2014. Archived 22 December 2015: https://web.archive.org/web/20151222085729/http://www.pged.org/event/the-future-of-technology-benefits-and-risks/
  34. "AI safety conference in Puerto Rico". Future of Life Institute. Archived 7 November 2015: https://web.archive.org/web/20151107081150/http://futureoflife.org/2015/10/12/ai-safety-conference-in-puerto-rico/
  35. "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter". Future of Life Institute. Archived 10 August 2019: https://web.archive.org/web/20190810020404/https://futureoflife.org/ai-open-letter
  36. "Beneficial AI 2017". Future of Life Institute. Archived 24 February 2020: https://web.archive.org/web/20200224010000/https://futureoflife.org/bai-2017/
  37. Metz, Cade. "Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots". The New York Times, 9 June 2018. ("The private gathering at the Asilomar Hotel was organized by the Future of Life Institute, a think tank built to discuss the existential risks of A.I. and other technologies.") Archived 15 February 2021: https://web.archive.org/web/20210215051949/https://www.nytimes.com/2018/06/09/technology/elon-musk-mark-zuckerberg-artificial-intelligence.html
  38. "Asilomar AI Principles". Future of Life Institute. Archived 11 December 2017: https://web.archive.org/web/20171211171044/https://futureoflife.org/ai-principles/
  39. "Asilomar Principles". OECD. Archived 9 September 2021: https://web.archive.org/web/20210909151942/https://www.oecd.org/going-digital/ai-intelligent-machines-smart-policies/conference-agenda/ai-intelligent-machines-smart-policies-oheigeartaigh.pdf
  40. "Beneficial AGI 2019". Future of Life Institute. Archived 28 July 2019: https://web.archive.org/web/20190728114618/https://futureoflife.org/beneficial-agi-2019/
  41. "CSER at the Beneficial AGI 2019 Conference". Center for the Study of Existential Risk. Archived 28 July 2019: https://web.archive.org/web/20190728114617/https://www.cser.ac.uk/news/cser-beneficial-agi-2019-conference/