Stuart talks about prediction: how we predict the arrival of AI, how we keep failing at it, and how to make better predictions. Strong AI will probably be here this century.
Also see his talk at Winter Intelligence:
Stuart Armstrong’s research at the Future of Humanity Institute centres on formal decision theory, general existential risk, the risks and possibilities of Artificial Intelligence (AI), assessing expertise and predictions, and anthropic (self-locating) probability.
How We’re Predicting AI—or Failing To:
This paper will look at the various predictions that have been made about AI and propose decomposition schemas for analyzing them. It will propose a variety of theoretical tools for analyzing, judging, and improving these predictions. Focusing specifically on timeline predictions (dates given by which we should expect the creation of AI), it will show that there are strong theoretical grounds to expect predictions to be quite poor in this area. Using a database of 95 AI timeline predictions, it will show that these expectations are borne out in practice: expert predictions contradict each other considerably, and are indistinguishable from non-expert predictions and past failed predictions. Predictions that AI lies 15 to 25 years in the future are the most common, from experts and non-experts alike.
Many thanks for watching!
Consider supporting me by:
a) Subscribing to my YouTube channel:
b) Donating via Patreon: and/or
c) Sharing the media I create
– Science, Technology & the Future: