Future of Humanity Institute
| Formation | 2005 |
| --- | --- |
| Dissolved | April 16, 2024 |
| Purpose | Research big-picture questions about humanity and its prospects |
| Headquarters | Oxford, England |
| Director | Nick Bostrom |
| Parent organization | Faculty of Philosophy, University of Oxford |
| Website | futureofhumanityinstitute.org |
The Future of Humanity Institute (FHI) was an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School.[1][2] Its director was philosopher Nick Bostrom.
Sharing an office and working closely with the Centre for Effective Altruism, the institute's stated objective was to focus research where it can make the greatest positive difference for humanity in the long term.[3][4] It engaged in a mix of academic and outreach activities, seeking to promote informed discussion and public engagement in government, businesses, universities, and other organizations. The centre's largest research funders included Amlin, Elon Musk, the European Research Council, Future of Life Institute, and Leverhulme Trust.[5]
The Institute was closed down on 16 April 2024, having "faced increasing administrative headwinds within the Faculty of Philosophy".[6][7]
History
Nick Bostrom established the institute in November 2005 as part of the Oxford Martin School, then known as the James Martin 21st Century School.
Bostrom and bioethicist Julian Savulescu also published the book Human Enhancement in March 2009.[9] In its later years, FHI focused on the dangers of advanced artificial intelligence (AI). In 2014, its researchers published several books on AI risk, including Stuart Armstrong's Smarter Than Us and Bostrom's Superintelligence: Paths, Dangers, Strategies.[10][11]
In 2018, the Open Philanthropy Project recommended a grant of up to approximately £13.4 million to FHI over three years.[12]
Existential risk
The largest topic FHI explored was global catastrophic risk, and in particular existential risk. In a 2002 paper, Bostrom defined an "existential risk" as one "where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential".[13] This includes scenarios where humanity is not directly harmed but fails to colonize space and make use of the observable universe's available resources in humanly valuable projects, as discussed in Bostrom's 2003 paper, "Astronomical Waste: The Opportunity Cost of Delayed Technological Development".[14]
Bostrom and Milan Ćirković's 2008 book Global Catastrophic Risks collects essays on a variety of such risks, both natural and anthropogenic. Possible catastrophic risks from nature include super-volcanism, impact events, and energetic astronomical events such as gamma-ray bursts, cosmic rays, solar flares, and supernovae. These dangers are characterized as relatively small and relatively well understood, though pandemics may be exceptions as a result of being more common, and of dovetailing with technological trends.[15][4]
Synthetic pandemics via weaponized biological agents also received attention from FHI, which expected the largest risks to stem from future technologies, and from advanced artificial intelligence in particular.
In 2020, FHI Senior Research Fellow Toby Ord published his book The Precipice: Existential Risk and the Future of Humanity, in which he argues that safeguarding humanity's future is among the most important moral issues of our time.[19][20]
Anthropic reasoning
FHI devoted much of its attention to exotic threats that had been little explored by other organizations, and to methodological considerations that inform existential risk reduction and forecasting. The institute particularly emphasized anthropic reasoning in its research, as an under-explored area with general epistemological implications.
Anthropic arguments FHI studied include the doomsday argument, which claims that humanity is likely to go extinct soon because it is unlikely that one is observing a point in human history that is extremely early. Instead, present-day humans are likely to be near the middle of the distribution of humans who will ever live.
A recurring theme in FHI's research is the Fermi paradox, the surprising absence of observable alien civilizations. Robin Hanson has argued that there must be a "Great Filter" preventing space colonization to account for the paradox. That filter may lie in the past, if intelligence is much more rare than current biology would predict; or it may lie in the future, if existential risks are even larger than is currently recognized.
Human enhancement and rationality
Closely linked to FHI's work on risk assessment, astronomical waste, and the dangers of future technologies was its work on the promise and risks of human enhancement. The modifications in question could be biological, digital, or sociological, and an emphasis was placed on the most radical hypothesized changes, rather than on the likeliest short-term innovations. FHI's bioethics research focused on the potential consequences of gene therapy, life extension, brain implants and brain–computer interfaces, and mind uploading.[21]
FHI also focused on methods for assessing and enhancing human intelligence and rationality, as a way of shaping the speed and direction of technological and social progress. Its work on human irrationality, as exemplified in its research on cognitive heuristics and biases, included a collaboration with Amlin to study the systemic risk arising from biases in modelling.[22][23]
Selected publications
- Toby Ord: The Precipice: Existential Risk and the Future of Humanity, 2020. ISBN 1526600218
- Nick Bostrom: Superintelligence: Paths, Dangers, Strategies, 2014. ISBN 978-0-19-967811-2
- Nick Bostrom and Milan Ćirković: Global Catastrophic Risks, 2011. ISBN 978-0-19-857050-9
- Nick Bostrom and Julian Savulescu: Human Enhancement, 2011. ISBN 0-19-929972-2
- Nick Bostrom: Anthropic Bias: Observation Selection Effects in Science and Philosophy, 2010. ISBN 0-415-93858-9
- Nick Bostrom and Anders Sandberg: Brain Emulation Roadmap, 2008.
References
- ^ a b "Humanity's Future: Future of Humanity Institute". Oxford Martin School. Archived from the original on 17 March 2014. Retrieved 28 March 2014.
- ^ "Staff". Future of Humanity Institute. Retrieved 28 March 2014.
- ^ "About FHI". Future of Humanity Institute. Archived from the original on 1 December 2015. Retrieved 28 March 2014.
- ^ a b Ross Andersen (25 February 2013). "Omens". Aeon Magazine. Retrieved 28 March 2014.
- ^ "Support FHI". Future of Humanity Institute. 2021. Archived from the original on 20 October 2021. Retrieved 23 July 2022.
- ^ "Future of Humanity Institute". Archived from the original on 17 April 2024. Retrieved 17 April 2024.
- ^ Maiberg, Emanuel (17 April 2024). "Institute That Pioneered AI 'Existential Risk' Research Shuts Down". 404 Media. Retrieved 17 April 2024.
- ^ "Google News". Google News. Retrieved 30 March 2015.
- ^ Nick Bostrom (18 July 2007). Achievements Report: 2008-2010 (PDF) (Report). Future of Humanity Institute. Archived from the original (PDF) on 21 December 2012. Retrieved 31 March 2014.
- ^ Mark Piesing (17 May 2012). "AI uprising: humans will be outsourced, not obliterated". Wired. Retrieved 31 March 2014.
- ^ Coughlan, Sean (24 April 2013). "How are humans going to become extinct?". BBC News. Retrieved 29 March 2014.
- ^ Open Philanthropy Project (July 2018). "Future of Humanity Institute — Work on Global Catastrophic Risks".
- ^ Nick Bostrom (March 2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". Journal of Evolution and Technology. 9. Retrieved 31 March 2014.
- ^ Nick Bostrom (November 2003). "Astronomical Waste: The Opportunity Cost of Delayed Technological Development". Utilitas. 15 (3): 308–314. S2CID 15860897. Retrieved 31 March 2014.
- ^ a b Ross Andersen (6 March 2012). "We're Underestimating the Risk of Human Extinction". The Atlantic. Retrieved 29 March 2014.
- ^ Kate Whitehead (16 March 2014). "Cambridge University study centre focuses on risks that could annihilate mankind". South China Morning Post. Retrieved 29 March 2014.
- ^ Jenny Hollander (September 2012). "Oxford Future of Humanity Institute knows what will make us extinct". Bustle. Retrieved 31 March 2014.
- ^ Nick Bostrom. "Information Hazards: A Typology of Potential Harms from Knowledge" (PDF). Future of Humanity Institute. Retrieved 31 March 2014.
- ^ Ord, Toby. "The Precipice: Existential Risk and the Future of Humanity". The Precipice Website. Retrieved 18 October 2020.
- ^ Chivers, Tom (7 March 2020). "How close is humanity to destroying itself?". The Spectator. Retrieved 18 October 2020.
- ^ Anders Sandberg and Nick Bostrom. "Whole Brain Emulation: A Roadmap" (PDF). Future of Humanity Institute. Retrieved 31 March 2014.
- ^ "Amlin and Oxford University launch major research project into the Systemic Risk of Modelling" (Press release). Amlin. 11 February 2014. Archived from the original on 13 April 2014. Retrieved 31 March 2014.
- ^ "Amlin and Oxford University to collaborate on modelling risk study". Continuity, Insurance & Risk Magazine. 11 February 2014. Retrieved 31 March 2014.