Nick Bostrom

Superintelligence

Image source: Future of Humanity Institute

Image license: CC BY-SA 4.0

Year of Birth

1973

Nationality

SE

Field of Knowledge

Philosophy


Twitter

@debatethoughts

Nick Bostrom (Swedish: Niklas Boström [²buːstrœm]; born 10 March 1973) is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology, and he is currently the founding director of the Future of Humanity Institute at Oxford University.
Bostrom is the author of over 200 publications, including Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller, and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002). In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list. Bostrom believes there are potentially great benefits from artificial general intelligence, but warns that it might very quickly transform into a superintelligence that would deliberately extinguish humanity out of precautionary self-preservation or some unfathomable motive, making it an absolute priority to solve the problem of control beforehand. Although his book on superintelligence was recommended by both Elon Musk and Bill Gates, Bostrom has expressed frustration that reactions to its thesis typically fall into two camps: one calling his recommendations absurdly alarmist because the creation of superintelligence is unfeasible, and the other deeming them futile because a superintelligence would be uncontrollable. Bostrom notes that both lines of reasoning converge on inaction rather than on trying to solve the control problem while there may still be time.

Wikipedia

Global Ranking
178