Niklas “Nick” Bostrom ( BOST-rəm; Swedish: Niklas Boström [¹nɪkːlas ²buːstrœm]; born 10 March 1973) is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology, and he is the founding director of the Future of Humanity Institute at Oxford University. In 2009 and 2015, he was included in Foreign Policy’s Top 100 Global Thinkers list.

Bostrom is the author of over 200 publications; he has written two books and co-edited two others. The two books he has authored are Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002) and Superintelligence: Paths, Dangers, Strategies (2014). Superintelligence was a New York Times bestseller, was recommended by Elon Musk and Bill Gates among others, and helped to popularize the term “superintelligence”.
Bostrom believes that superintelligence, which he defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”, is a potential outcome of advances in artificial intelligence. He views the rise of superintelligence as potentially highly dangerous to humans, but nonetheless rejects the idea that humans are powerless to stop its negative effects. In 2017, he co-signed a list of 23 principles that all AI development should follow.