Title: Can Intelligence Explode?

Abstract: The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. After a short introduction to this intriguing potential future, I will elaborate on what it could mean for intelligence to explode. In the course of the talk, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed explosion from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.

Video of lecture:

Slides (pdf): http://www.hutter1.net/publ/ssingularity.pdf
Slides (PowerPoint): http://www.hutter1.net/publ/ssingularity.ppsx
Paper: M. Hutter, Can Intelligence Explode?, Journal of Consciousness Studies, Vol. 19, No. 1–2 (2012), pages 143–166.
http://www.hutter1.net/publ/singularity.pdf

Marcus Hutter is a Professor in the Research School of Computer Science (RSCS) at the Australian National University (ANU) in Canberra. Before that, he was with IDSIA in Switzerland and with NICTA. His research at RSCS/ANU/NICTA/IDSIA is/was centered around Universal Artificial Intelligence, a mathematical top-down approach to AI related to Kolmogorov complexity, algorithmic probability, universal Solomonoff induction, Occam’s razor, Levin search, sequential decision theory, dynamic programming, reinforcement learning, and rational agents. Generally, Marcus is attracted to fundamental problems on the boundary between Science and Philosophy that have a chance of being solved within his expected lifetime. One is Artificial Intelligence; the other is Particle Physics, with questions related to physical Theories of Everything. Mathematics in its whole breadth (statistics, numerics, algebra, …) has become his constant and cherished companion.

5 Responses to “Marcus Hutter – Can Intelligence Explode?”

  1. Samantha Atkins says:

    I am afraid your discussion loses me as soon as you talk about any real infinite anything in finite time or space. There is no possibility whatsoever of such a logical impossibility occurring. Infinite anything cannot exist in a finite expanse of space-time. If this is a reasonable definition of ‘singularity’, which I do not agree it is, then clearly singularity is impossible.

    Singularity as usually meant does not require infinite intelligence in finite time at all. It only requires significantly greater than human artificial intelligence in order for humans to experience it as a “singularity” – a point beyond which they cannot see or comprehend clearly if at all. There never will be and never can be infinite intelligence unless it turns out to be somehow possible to transcend space-time completely. I am not sure it is possible even then as that state would be severely outside most mental constructs I could use for evaluation.
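The arithmetic behind the "infinitely many improvements in finite time" idea that this comment pushes back on is just a geometric series: if each self-improvement cycle runs on hardware faster than the last, the total wall-clock time for all cycles converges. A minimal sketch (my own toy illustration, not from Hutter's paper; the function name and parameters are hypothetical):

```python
# Toy model: step n of recursive self-improvement takes first_step / speedup**n
# time units, because each generation runs `speedup` times faster than the last.
# With speedup > 1 the total time for infinitely many steps converges
# (here to first_step * speedup / (speedup - 1) = 2.0 time units).
def total_time(steps, speedup=2.0, first_step=1.0):
    """Cumulative wall-clock time for the first `steps` improvement cycles."""
    return sum(first_step / speedup**n for n in range(steps))

for n in (1, 10, 100):
    print(n, total_time(n))
```

The partial sums approach 2.0 but never exceed it, which is why a "speed explosion" of this kind packs unboundedly many cycles into a finite interval without requiring anything physically infinite at any single moment.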

  2. Rob says:

    Please post the video!

  3. [...] Intelligence Explode?” is the title of a lecture and paper by artificial general intelligence (agi) researcher Marcus Hutter. From Hutter’s [...]

  4. robert says:

I think this could be a game development application. Game development also needs to use AI knowledge, so it is necessary to know about artificial intelligence. Thank you for the article and video.
