Singularity FAQ: The Intelligence Explosion

Q1). Wouldn’t accelerating growth run up against hard limits set by the environment, or by the laws of physics?

A1). Two of the three primary ideas around the Singularity, the Event Horizon and the Intelligence Explosion, don't require the sort of exponential growth that Ray Kurzweil and Gordon Moore describe. All you need for either of them to happen is a single intelligence that is both smarter than humans and able to reprogram itself. It doesn't matter whether technological progress is accelerating, proceeding at a constant rate, slowing down, or even standing still.

As for the third major Singularity idea, Accelerating Change, Ray Kurzweil and others have pointed out that infinite growth isn't necessary; the potential for enormous but finite growth is more than enough. Even if we only consider the Earth, we have 6,000,000,000,000,000,000,000,000 kilograms of available matter to work with, almost none of which has ever been used for anything. The ultimate limits on computing power are equally enormous, even if we don't consider future technologies like quantum computing or reversible computing. Landauer's principle requires that at least about 2.9 × 10^-21 joules of energy be expended for every bit of information erased at room temperature, and the Earth receives a steady energy flow of about 122 PW, or 122,000,000,000,000,000 joules per second, from the Sun. Hence, the total amount of computing power that the Earth can support is around 10^36 FLOPS, or around one hundred billion billion times larger than the processing power of the human brain.
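To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The Landauer bound and the solar power figure come from the text above; the assumption that one floating-point operation costs on the order of ten bit erasures, and the 10^16 FLOPS estimate for the human brain, are illustrative assumptions only:

```python
import math

# Landauer's principle: erasing one bit at temperature T dissipates
# at least k_B * T * ln(2) joules.
k_B = 1.380649e-23           # Boltzmann constant, J/K
T_room = 300.0               # room temperature, K
landauer_j_per_bit = k_B * T_room * math.log(2)
print(f"Landauer bound: {landauer_j_per_bit:.2e} J/bit")    # ~2.9e-21 J

# Solar power intercepted by the Earth (~122 PW, per the text).
solar_power_w = 1.22e17      # J/s

# Maximum irreversible bit operations per second this budget can pay for.
bit_ops_per_s = solar_power_w / landauer_j_per_bit
print(f"Bit operations/s: {bit_ops_per_s:.1e}")             # ~4e37

# Assumption (illustrative): ~10 bit erasures per floating-point op,
# which lands in the ballpark of the ~10^36 FLOPS figure cited above.
flops = bit_ops_per_s / 10
print(f"Rough FLOPS limit: {flops:.1e}")                    # ~4e36

# Assumption (illustrative): ~1e16 FLOPS for the human brain;
# published estimates vary by several orders of magnitude.
brain_flops = 1e16
print(f"Ratio to one brain: {flops / brain_flops:.1e}")     # ~4e20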

Q2). Shouldn’t the “exponential spiral” of an intelligence explosion quickly wind down, because each AI is more complex, and thus harder to reprogram, than the previous AI?

A2). This is, indeed, one possible scenario. However, prudence demands that we consider the worst-case scenario, not just the best case. It could be that an intelligence explosion peters out quickly, but how would we know that ahead of time? The only way to find out for certain would be to actually create one. And we would be not just foolish, but the most incompetent engineers in all of human history, if we closed our eyes and left the future of humanity to guesswork and naive optimism.

The evidence from evolution also suggests that each additional increment of intelligence is easier to produce, not harder. It took evolution hundreds of millions of years to go from very primitive neural networks to the more complicated neural networks of amphibians and reptiles, a gap that human engineers crossed in roughly fifty years of work on artificial neural networks. By contrast, going from chimpanzee-level intelligence to human-level intelligence took only about five million years, around 1% as long.

Q3). Wouldn’t super-fast AI require immense amounts of electricity, both for direct power consumption and for the cooling needed to keep it running?

A3). As technology improves, the power required to achieve a given amount of computing drops. Power consumption per FLOP has improved over time along with Moore’s Law, like every other measure of the performance of computer technology (see Ray Kurzweil’s article The Law of Accelerating Returns). The first computers, like ENIAC, needed their own dedicated power lines, while a modern laptop draws only a few tens of watts. Hence, given sufficient progress in electronics, we should eventually get close to the limits of what is physically possible, and there are both theoretical and practical grounds for believing that these limits are not very constraining.

On the theoretical side, even without new technologies such as quantum and reversible computing, only a tiny amount of heat dissipation is required to get rid of the entropy that computation generates. The theoretical lower bound for the heat dissipation of a computer with the processing power of the human brain, running at room temperature, is less than a tenth of a milliwatt, far too small to be worth worrying about. At that efficiency, you could run a superintelligence with a thousand times the raw processing power of the brain on a single watch battery.
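As a sanity check on these figures, the same Landauer arithmetic can be run for a brain-scale computer. The assumption of roughly 10^16 irreversible bit operations per second for brain-equivalent processing, and the watch battery capacity, are illustrative assumptions, not figures from the text:

```python
import math

k_B = 1.380649e-23
T_room = 300.0
landauer = k_B * T_room * math.log(2)       # ~2.9e-21 J per bit erased

# Assumption (illustrative): brain-equivalent processing takes ~1e16
# irreversible bit operations per second.
brain_ops_per_s = 1e16
min_power_w = brain_ops_per_s * landauer
print(f"Minimum dissipation: {min_power_w * 1e3:.3f} mW")   # ~0.029 mW

# A superintelligence with 1000x the brain's raw processing power:
super_power_w = 1000 * min_power_w                          # ~29 mW

# Assumption (illustrative): a silver-oxide watch battery stores
# roughly 1.5 V * 0.19 Ah of energy.
battery_j = 1.5 * 0.19 * 3600                               # ~1000 J
hours = battery_j / super_power_w / 3600
print(f"Watch-battery runtime at 1000x brain power: {hours:.0f} h")  # ~10 h
```

Under these assumptions, even a thousandfold-brain-power machine at the Landauer limit dissipates only tens of milliwatts, so a single watch battery could indeed power it for hours at a stretch.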

And on the practical side, we already know from biology that it’s possible to build very efficient computers. A human brain dissipates only around twenty watts of heat, less than that of a single light bulb, and this is probably far from the best we can do. Crows, for instance, have been shown to be quite intelligent by the standards of the animal world, despite having far smaller brains than humans, chimpanzees or dolphins.

Q4). If we look at biology, we see no reason to believe in any kind of “intelligence explosion”. Intelligence increased relatively smoothly from primitive brains, through reptiles, mammals, chimps, and early hominids to humans. Why should an intelligence explosion happen now, if it hasn’t happened in the past hundred million years?

A4). Even small increases in intelligence, of the sort for which there are biological precedents, can have a huge impact on the world. The human brain is about three times larger than the chimpanzee brain, relative to body size, and has about twice the frontal cortex area, for a total sixfold advantage. This sixfold advantage in raw processing power is much less than the speedup we can get by building generally intelligent computers; computers have become roughly six thousand times more powerful in just the last fifteen years. Yet this sixfold advantage gave humanity agriculture, civilization, industry, and complete domination over all other species, while the chimpanzees were never even a serious contender.
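A quick sketch of the growth figures above; the ~1.2-year doubling time is an illustrative assumption chosen to match the six-thousandfold figure cited in the text, not a measured number:

```python
# Human vs. chimpanzee hardware advantage cited above.
relative_brain_size = 3      # ~3x larger, relative to body size
frontal_cortex_ratio = 2     # ~2x the frontal cortex area
print(f"Biological advantage: {relative_brain_size * frontal_cortex_ratio}x")  # 6x

# Computer hardware over the period the text cites, assuming an
# (illustrative) doubling time of ~1.2 years for processing power:
years = 15
doubling_time = 1.2
speedup = 2 ** (years / doubling_time)
print(f"Hardware speedup over {years} years: {speedup:,.0f}x")  # ~5,793x
```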

Q5). What if there is a fundamental limit on intelligence, somewhere close to or only slightly above the human level?

A5). Evolution is a messy and inefficient designer, so there is every reason to think that we can do better. For example, only mutations with an immediate effect on an organism’s fitness are likely to spread in a population, and even useful mutations may disappear due to random chance if too few organisms carry them. Furthermore, evolution cannot backtrack and rebuild an already evolved mechanism in a more efficient way once other mechanisms have developed on top of it, because changing the inefficient mechanism would break all the systems relying on it. For instance, the molecular machinery of ATP synthase is essentially the same in animal mitochondria, plant chloroplasts, and bacteria; ATP synthase has not changed significantly since the rise of eukaryotic life two billion years ago, because changing it would break all of the biological machinery that depends on it.

On the other hand, human engineers are capable of building countless things that could never have evolved naturally, from rockets that travel to the Moon to machines that communicate nearly instantaneously between opposite sides of the globe. Given that we have seen, time and time again, how humans can surpass evolution, it seems very unlikely that evolution has created the best intelligence that it is physically possible to build. It’s much more likely that evolution has only reached the lowest-hanging fruit: the forms of intelligence that are easiest to implement, and that have the largest immediate payoff in terms of reproductive fitness.

We can already see, through the work of cognitive scientists, many ways in which the human design might be improved. For instance, in humans, working memory capacity has been found to correlate highly with successful performance on a number of cognitive tasks. Simply increasing the memory capacity of an AI system that is already human-equivalent might well make it more effective and intelligent, since it could consider the effects of more variables in a single model. Similarly, giving it more processing power would let it think and learn more in a shorter time, a major bottleneck for human scholars, who have the time to learn only a tiny fraction of humanity’s accumulated knowledge. While scaling laws might prevent an AI’s memory and processing power from increasing indefinitely, there is no sound basis for assuming that the human level of intelligence is anywhere near the ultimate maximum.
