Title: Can Intelligence Explode?
Abstract: The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. After a short introduction to this intriguing potential future, I will elaborate on what it could mean for intelligence to explode. In the course of this, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.
Video of lecture:
Slides (pdf): http://www.hutter1.net/publ/ss
Slides (PowerPoint): http://www.hutter1.net/publ/ss
Paper: M. Hutter, Can Intelligence Explode?, Journal of Consciousness Studies, Vol. 19, No. 1–2 (2012), pages 143–166.
Marcus Hutter is a Professor in the Research School of Computer Science (RSCS) at the Australian National University (ANU) in Canberra. Before that, he was with IDSIA in Switzerland and with NICTA. His research at RSCS/ANU/NICTA/IDSIA is/was centered around Universal Artificial Intelligence, a mathematical top-down approach to AI related to Kolmogorov complexity, algorithmic probability, universal Solomonoff induction, Occam’s razor, Levin search, sequential decision theory, dynamic programming, reinforcement learning, and rational agents. Generally, Marcus is attracted to fundamental problems on the boundary between Science and Philosophy that have a chance of being solved within his expected lifespan. One is Artificial Intelligence. The other is Particle Physics, with questions related to physical Theories of Everything. Mathematics in its whole breadth (statistics, numerics, algebra, …) has become his constant and cherished companion.