This is an abstract of a paper presented at the 2009 European Conference on Computing and Philosophy in Barcelona, Spain.
Changing the frame of AI futurism: From storytelling to heavy-tailed, high-dimensional probability distributions
By Stephen Rayhawk, Anna Salamon, Thomas McCabe, Michael Anissimov, and Rolf Nelson
We introduce an interactive web application, the Uncertain Future, that uses structured probabilistic models to help users think through possible timelines for strong artificial intelligence. To date, there have been few to no efforts to approach the full, multifaceted problem of forecasting the potential development of strong AI using the best formal tools available. There has been serious forecasting work on individual AI-related aspects of the world, such as the future cost of computing power, the development of computer chess players, or the economic impact of robotic systems that substitute for human labor (reviewed in ). Long-range forecasts and models have generally had a narrow focus, such as trend-extrapolation models of “accelerating change” or analyses of economic and power dynamics given strong AI. The Uncertain Future is an early attempt toward presenting a single, combined model that integrates our best estimates about each of the factors and their possible causal interactions over time – including a formal probabilistic treatment of our uncertainties.
The present gap in modeling the trajectory of AI matters. A number of analysts have argued that: (a) there is a substantial chance that ‘human-level’ AI will be developed during this century; (b) human-level AI would have an impact at least comparable to such historical events as the appearance of sexual recombination, the oxygen transition, human language, or the industrial revolution. If it is correct to assign any non-negligible probability that both propositions are true, then it is important to use the best available tools to model the relationship between near-term policy decisions and the possible outcomes.
Moreover, strong AI has several features which can be expected to limit the effectiveness of both qualitative scenario analysis by single experts and quantitative trend extrapolation. Prediction around strong AI involves unprecedented phenomena that are difficult to visualize, variables which can take on a wide range of values, large potential impacts that can create emotional biases in both individual judgment and community discourse, and a long timeline during which several background variables relevant to AI development can interact in unexpected ways. Simple quantitative trend extrapolation can easily break when the context shifts away from the historical periods that generated the data. Detailed qualitative scenario analysis, meanwhile, faces two challenges. First, the many variables involved demand consideration of a great number of scenarios to capture the space of plausible outcomes, while even the best-known futurists typically focus on only one. Second, even if the attempt to evaluate other scenarios is made, psychological research in heuristics and biases indicates that in complex domains with large unknowns, even domain experts will tend to attach excessive confidence to specific, easily visualizable scenarios. We do in fact see much published AI futurism confidently proclaiming the likelihood of specific future scenarios in cases where others confidently disagree. In policymaking, the characteristic result is neglect of the broad “everything else” category of events that could blindside us.
The Uncertain Future project is an experimental attempt at avoiding these pitfalls. The project has two faces:
1. As a future-projection tool, the Uncertain Future generates probability distributions over scenarios using the formalism of continuous-time Bayes nets. We freeze in a particular Bayes net model structure and use experts’ impressions to choose the parameters. This approach is similar to Trend Impact Analysis and Cross Impact Analysis, but admits principled extensions more readily.
2. As an educational tool, the Uncertain Future project allows individuals to enter their own beliefs for each parameter (in place of the experts’ impressions) and to see the implications of their own causal beliefs, i.e. “the Socratic method meets modern probabilistic reasoning”.
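To make the projection step concrete, the following sketch forward-samples a toy two-variable continuous-time model in the style described above. The variable names, hazard rates, and the `sample_timeline` helper are hypothetical placeholders for illustration, not the Uncertain Future's actual model or parameters:

```python
import random

def sample_timeline(horizon=100.0, seed=None):
    """Forward-sample one timeline from a toy two-variable
    continuous-time model: a race between competing exponential
    clocks, where one event changes the other's rate."""
    rng = random.Random(seed)
    # Hypothetical hazard rates (events per year) -- placeholders.
    RATE_DISRUPTION = 0.005  # major societal disruption
    RATE_AGI_FAST = 0.03     # AGI hazard while progress continues
    RATE_AGI_SLOW = 0.003    # AGI hazard after a disruption
    t, disrupted = 0.0, False
    while t < horizon:
        rate_agi = RATE_AGI_SLOW if disrupted else RATE_AGI_FAST
        total = rate_agi + (0.0 if disrupted else RATE_DISRUPTION)
        t += rng.expovariate(total)     # time to next event
        if t >= horizon:
            return None                 # no AGI within horizon
        if rng.random() < rate_agi / total:
            return t                    # AGI arrival (years from now)
        disrupted = True                # disruption: AGI hazard drops

# Estimate the implied arrival distribution by Monte Carlo.
samples = [sample_timeline(seed=i) for i in range(10000)]
p_agi = sum(s is not None for s in samples) / len(samples)
```

Repeated sampling of timelines like this yields a probability distribution over scenarios rather than a single predicted future; the real system conditions such rates on many more interacting variables.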
Our system’s key features:
Separated belief-components: The Uncertain Future breaks participants’ beliefs about AI timelines into a number of relatively independent components. For example, it requires participants to separately specify their probability distributions for how long Moore’s law will continue, for the amount of computation required to model the brain, and for the possibility of nuclear war or other major societal disruption (see text box).
This helps participants focus separately on each major component of the world, including several background variables that might affect AI development and might not be part of participants’ ordinary views of the future (e.g., nuclear and other major disruptions, or intelligence augmentation of a sort that speeds science).
Probabilities, not “most likely” events: Participants enter each belief visually, with a simple point-and-click interface for specifying their probability distribution (Figure 1). All beliefs are entered as probability distributions; even if participants think a particular parameter value or narrow range of values “most likely”, they still must enter how likely, so that non-“mainline” sequences of events can be included in their picture. While this is standard practice in many areas of forecasting, it is not common in long-range AI futurism; for example, Kurzweil outlines a specific range of future predictions, including timelines of AI development, but does not attach probabilities to the predicted ranges.
The combined use of probabilities and of separated belief-components should help participants move from single, easily visualized storylines about “how the future will go” to the broad range of scenarios in which one or more variables may turn in an unexpected direction. For many users, the user’s “mainline” scenario track turns out to have only a minority of their total probability; compounding of multiple probability distributions causes a wide range of future outcomes to emerge as a natural consequence.
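This compounding effect can be seen in a minimal sketch. The component names and probabilities below are invented placeholders, not the Uncertain Future's actual parameters; the point is only that the product of per-component “most likely” probabilities is small:

```python
# Hypothetical belief-components, each a discrete distribution
# over a few coarse outcomes (placeholder values for illustration).
components = {
    "moores_law_years_left":   {"<10": 0.3, "10-25": 0.5, ">25": 0.2},
    "brain_flops_needed":      {"1e16": 0.25, "1e18": 0.5, "1e20": 0.25},
    "major_disruption":        {"none": 0.7, "some": 0.3},
    "software_insight_needed": {"low": 0.2, "medium": 0.5, "high": 0.3},
}

# The "mainline" scenario takes the single most likely value of each
# component; its joint probability is the product of the maxima.
p_mainline = 1.0
for dist in components.values():
    p_mainline *= max(dist.values())

print(p_mainline)  # 0.5 * 0.5 * 0.7 * 0.5 ≈ 0.0875
```

Even with only four components, each dominated by one plausible value, the single most likely scenario carries well under a tenth of the total probability mass.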
Collated access to expert opinions, and to the belief-components of other participants: If a user, Bob, wants to think through AI futures, he can incorporate the risk of nuclear disruption without himself being knowledgeable about nuclear risks. Next to his probability-distribution entry box, he’ll find a list of relevant experts’ views on the size of nuclear risks over the relevant time-period; Bob can defer to expert consensus on this issue (which perhaps is not his specialty) and can then go on to enter his own, more thought-out parameter-values for belief-components for which he has background enough to reasonably disagree with the consensus.
Also, if Bob disagrees with Jane about AI, they can isolate the belief-components that underlie their disagreement and address them in particular. Science has made progress largely by reducing large, important problems to smaller and more manageable components that can be addressed with specific data and models; systems such as the Uncertain Future can help us to move between the sub-problems and the complex whole.
The Uncertain Future is an early trial project, for which many simplifications were made. In the medium term we would like to capture additional parameters and effects, by building a platform for modular collaboration on futurist scenario projection and model-building. To this end, we wish to note that many of the probabilistic and quantitative methods used in futurism, including both trend extrapolation from historical data and strategic projection of future policies, can be naturally understood in a principled fashion as special cases or approximations within the formalism of continuous-time Bayes nets containing decision nodes. (This formalism is the unifying generalization of dynamic and continuous-time Bayes nets, continuous-time and partially observable Markov decision processes, Bayes decision nets, stochastic differential equations, and control theory, and an important special case of differential game theory.)
For example, both the historical curve-fitting used in the “surprise-free future” phase of trend impact analysis and the perturbations used in the “impact” phase can be understood together as an approximation to Bayesian inference under a stochastic-differential-equation model of the measured variable and the potential disrupting factors [16, section 3.2]. The Bayesian discipline of isolating causal dependencies permits model components and constraints from historical data to be added or modified in a modular fashion. While current computational methods and professional experience with this formalism are limited, we anticipate that using the formalism as a lingua franca for model-building will make it significantly easier to extend AI modeling to use more variables, datasets, and interactions in a principled and relatively error-tolerant manner.
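A minimal sketch of this reading, assuming a scalar random-walk state observed with Gaussian noise (all numbers illustrative): the “surprise-free” phase is ordinary Kalman filtering of the discretized stochastic differential equation, and an “impact” enters the same formalism as added process uncertainty rather than an ad-hoc perturbation:

```python
def kalman_step(mean, var, obs, process_var, obs_var):
    """One predict/update cycle for a scalar random-walk state."""
    # Predict: the latent trend diffuses (Euler step of the SDE).
    mean_pred, var_pred = mean, var + process_var
    # Update: condition on the noisy measurement (Bayes rule for
    # Gaussians), shrinking toward the observation by the gain.
    gain = var_pred / (var_pred + obs_var)
    mean_post = mean_pred + gain * (obs - mean_pred)
    var_post = (1.0 - gain) * var_pred
    return mean_post, var_post

# Track some measured variable, e.g. log cost-performance of
# computing (made-up observations, for illustration only).
observations = [1.0, 1.1, 1.2, 1.35, 1.4]
mean, var = 0.0, 10.0  # vague prior over the latent trend
for y in observations:
    mean, var = kalman_step(mean, var, y, process_var=0.01, obs_var=0.05)

# A potential disrupting factor widens the predictive distribution
# within the same model, here as extra process variance.
var += 0.5
```

The filtered mean plays the role of the fitted “surprise-free” trend, while the variance carries forward exactly the uncertainty that scenario-style perturbations are meant to represent.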
1. T. Anderson, R. Fare, S. Grosskopf, L. Inman, X. Song, Further examination of Moore’s law with data envelopment analysis, Technological Forecasting & Social Change, 69(5) (2002) 465–477.
2. R. Kurzweil, The Age of Intelligent Machines, MIT Press, Cambridge, MA, 1990.
3. A. López Peláez, D. Kyriakou, Robots, genes, and bytes: technology development and social changes towards the year 2020, Technological Forecasting & Social Change, 75(8) (2008) 1176–1201.
4. R. Kurzweil, The Singularity is Near: When Humans Transcend Biology, Viking Press, 2005.
5. J.S. Hall, Engineering Utopia. In: P. Wang, B. Goertzel, & S. Franklin (Eds.), Artificial General Intelligence 2008: Proceedings of the First AGI Conference, IOS Press, 2008.
6. E. Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risks. In: N. Bostrom, M. Ćirković (Eds.), Global Catastrophic Risks, Oxford University Press, New York, NY, 2008.
7. V. Vinge, The Coming Technological Singularity: How to Survive in the Post-Human Era, VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute (Mar.) (1993).
8. N. Bostrom, Ethical Issues in Advanced Artificial Intelligence. In: Smit et al. (Eds.), Cognitive, Emotional, and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Institute of Advanced Studies in Systems Research and Cybernetics Vol. 2, Niagara Falls, Ontario, 2003.
9. A. Sandberg and N. Bostrom, Whole Brain Emulation: A Roadmap, Technical report, Future of Humanity Institute, Oxford, 2008.
10. R. Hanson, Long-term growth as a sequence of exponential modes, Journal of Economic Behavior & Organization, in press.
11. J. Matheny, Reducing the risk of human extinction, Risk Analysis, 27(5) (2007) 1335–1344.
12. A. Tversky and D. Kahneman, Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment. In: T. Gilovich, D. Griffin, D. Kahneman (Eds.), Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge University Press, 2002.
13. U. Nodelman, Continuous Time Bayesian Networks, Ph.D. thesis, Stanford University, 2007.
14. N. Agami, A. Omran, M. Saleh, H. El-Shishiny, An enhanced approach for Trend Impact Analysis, Technological Forecasting and Social Change, 75(9) (2008) 1439–1450.
15. S. Asan, U. Asan, Qualitative cross-impact analysis with time consideration, Technological Forecasting and Social Change, 74(5) (2007) 627–644.
16. S. Särkkä, Recursive Bayesian inference on stochastic differential equations, Helsinki University of Technology Laboratory of Computational Engineering Publications, Report B54, (2006).