
Future Progress in Artificial Intelligence: A Survey of Expert Opinion
In: Fundamental Issues of Artificial Intelligence



Part of the Synthese Library Book Series (volume 376)
Editor: Müller, Vincent C.



Publisher: Springer International Publishing
Copyright: © Springer International Publishing Switzerland 2016
ISBN: 978-3-319-26483-7
Pages: 555–572
DOI: 10.1007/978-3-319-26485-1_33

Abstract

There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040–2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.

Published: Jun 8, 2016

Keywords: Artificial intelligence; AI; Machine intelligence; Future of AI; Progress; Superintelligence; Singularity; Intelligence explosion; Humanity; Opinion poll; Expert opinion
