Singularity FAQ: Implementation

Q1). Don’t most scientists and engineers think that we’re nowhere near building an AI?

A1). It is true that the field of “Artificial Intelligence” has made only moderate progress in the past thirty years. But the building blocks of a theory of human intelligence have been quietly falling into place: cognitive science has made huge strides, Bayesian methods and information theory have been popularized, and evolutionary psychology has continued to make progress. All of these fields are vital to true Artificial Intelligence, but their progress has gone mostly unnoticed, except by academic specialists.

Q2). Can’t computers only do what they’re programmed to do?

A2). Even simple programs can produce surprising, novel, apparently inexplicable results. We still don’t fully understand the behavior of a number of five-state Turing machines, let alone more complex systems; see the literature on Busy Beaver functions. It is hard to insist that we “programmed a computer” to do something when we have studied that program for a decade and still can’t figure out what we supposedly programmed it to do.
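To make this concrete, here is a minimal sketch (not from the original post) of a Turing machine simulator in Python. The rule table is the known two-state, two-symbol “busy beaver” champion, which halts after 6 steps with 4 ones on the tape; even for a machine this small, the only way to learn what it does is to run it, and for five states the question stayed open for decades.

```python
from collections import defaultdict

# (state, symbol read) -> (symbol to write, head move, next state)
# This is the 2-state, 2-symbol busy beaver champion: halts after 6 steps with 4 ones.
RULES = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run(rules, max_steps=1_000_000):
    tape = defaultdict(int)              # unbounded tape of zeros
    head, state, steps = 0, "A", 0
    while state != "HALT" and steps < max_steps:
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
        steps += 1
    return steps, sum(tape.values())

steps, ones = run(RULES)
print(f"halted after {steps} steps with {ones} ones on the tape")
```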

Q3). Isn’t the human brain analog, unlike computers, which are digital systems?

A3). A digital computer can simulate an analog system to an arbitrarily high level of accuracy: one kilobyte of digital data is enough to represent over two thousand decimal digits of precision. In addition, neural firing is itself essentially digital: a neuron either fires or it does not, with nothing in between.
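As a quick sanity check on the “two thousand digits” figure, a one-line calculation (a sketch using only the Python standard library):

```python
import math

bits = 8 * 1024                      # one kilobyte of digital data
digits = bits * math.log10(2)        # decimal digits representable in that many bits
print(f"{bits} bits ~ {digits:.0f} decimal digits of precision")   # about 2466
```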

Q4). Doesn’t Gödel’s Theorem show that no computer or mathematical system can match human reasoning?

A4). Humans are also subject to Gödel’s Theorem: formalize human reasoning, and that formal system will have a Gödel sentence it cannot prove either. This does not stop us from being intelligent and having a tremendous impact on the state of the Earth.

Q5). Isn’t it impossible to make something more intelligent/complex than yourself?

A5). Evolution’s algorithm, blind variation plus selection, is extremely simple, yet it has created creatures of fantastic complexity, including us.
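As an illustration (a toy sketch in the style of Dawkins’ “weasel” program, not a claim about how natural selection actually searches), the whole algorithm of copying with errors plus selection fits in a few lines, yet it reliably assembles a structure that the loop itself contains no blueprint for:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"          # arbitrary toy target
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while fitness(parent) < len(TARGET):
    # the entire algorithm: copy with errors, keep the best copy
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=fitness)
    generation += 1
print(f"target reached after {generation} generations")
```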

Q6). What if creating an AI, even if it’s possible in theory, is far too complex for human programmers?

A6). It is impossible to say how an AI’s complexity will relate to its intelligence, because a precise theory of intelligence has yet to be developed. In addition, there are likely many different ways to build an intelligence, and each may scale differently with complexity: some designs may gain capability exponentially from a linear increase in complexity, while others may not.

Q7). Isn’t true AI impossible, since you can’t program a computer to be prepared for every eventuality?

A7). With good programming and enough memory, an AI can handle an arbitrarily large number of circumstances without having been programmed for each one in advance, just as humans do. Instead of specific case-by-case logic, evolution has equipped us with general-purpose algorithms, such as the ones that let us effortlessly read the expressions of others, and an AI can be designed with far more such algorithms than we have. For instance, nothing in our genes, or in any computer system, specifies the answer to the question “What is 781,681,501,591,124 times 681,156,581,178,418?”, yet both humans and computers can answer it, because both have a general-purpose algorithm for multiplication.
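Here is a sketch of what such a general-purpose algorithm looks like: the schoolbook long-multiplication routine below is a handful of lines, was never told about these particular fifteen-digit numbers, and handles operands of any length.

```python
def multiply(x: str, y: str) -> str:
    """Schoolbook long multiplication on decimal strings of arbitrary length."""
    result = [0] * (len(x) + len(y))
    for i, a in enumerate(reversed(x)):
        for j, b in enumerate(reversed(y)):
            result[i + j] += int(a) * int(b)
            result[i + j + 1] += result[i + j] // 10   # carry
            result[i + j] %= 10
    return "".join(map(str, reversed(result))).lstrip("0") or "0"

product = multiply("781681501591124", "681156581178418")
print(product)
print(product == str(781681501591124 * 681156581178418))   # cross-check: True
```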

Q8). We still don’t have the technological/scientific prerequisites for building AGI. If we want to build it, shouldn’t we develop these first, instead of funding AGI directly?

A8). Any necessary prerequisites can be funded by the AGI project directly, instead of waiting around for third parties to do the science and engineering for us. We still don’t know what these prerequisites are, so at a minimum, the field still needs to be investigated until we can determine where to go next.

Q9). Since FAI theory isn’t tested, wouldn’t the only way to know if it works be actually building an AGI?

A9). Several components of the theory, such as recursive decision theory, can be tested on much less complex systems. It is true, however, that testing will become harder as the systems involved grow more complex.

Q10). Won’t any true intelligence require a biological substrate?

A10). Biological substrates are made of atoms, and atoms can be simulated on any Turing-equivalent computer given enough time and memory.
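As a modest illustration of the principle (a sketch with textbook-style round numbers, not a serious biophysical model, and far coarser than atom-level simulation), continuous biological dynamics can be stepped forward digitally to whatever resolution time and memory allow:

```python
# Leaky integrate-and-fire neuron, advanced with simple Euler steps.
dt = 0.0001                                        # seconds per step
tau = 0.02                                         # membrane time constant (s)
v_rest, v_thresh, v_reset = -0.070, -0.050, -0.070 # volts
r_m, i_in = 1e7, 2.5e-9                            # membrane resistance (ohm), input current (A)

v = v_rest
spikes = 0
for step in range(int(1.0 / dt)):                  # simulate one second
    dv = (-(v - v_rest) + r_m * i_in) / tau        # leak toward rest plus driven input
    v += dv * dt
    if v >= v_thresh:                              # all-or-nothing spike, then reset
        spikes += 1
        v = v_reset
print(f"simulated 1 s of membrane dynamics; {spikes} spikes")
```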

Q11). What if we can’t reach the levels of computing power needed to equal the brain using currently existing hardware paradigms?

A11). IBM’s BlueGene already exceeds many estimates of the human brain’s computing power. With nanotechnology, we should be able to get 10^20 FLOPS on a desktop computer.
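For rough scale, a back-of-the-envelope comparison (the brain figures below are commonly cited estimates, taken here as assumptions rather than settled facts):

```python
desktop_nanotech_flops = 1e20            # the figure quoted in the answer above
brain_estimates = {                      # assumptions: commonly cited estimates
    "low estimate (Moravec-style)": 1e14,
    "high estimate (Kurzweil-style)": 1e16,
}
for label, ops in brain_estimates.items():
    print(f"{label}: 1e20 FLOPS ~ {desktop_nanotech_flops / ops:.0e} brain-equivalents")
```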
