Singularity FAQ: Consciousness

Q1). Isn’t consciousness necessary for true AI?

A1). Assuming that humans are conscious, fully duplicating a human would indeed require a conscious AI. However, since we don’t understand intelligence all that well yet, it would be very presumptuous of us to say that consciousness is necessary for intelligence. It would be like a chimpanzee concluding that all primates must live in trees.

Q2). What if computation isn’t sufficient for consciousness?

A2). Every atom in the human brain obeys the laws of physics. These laws are very well understood, and can be modeled on any Turing-complete computer system with enough RAM and processing power. With sufficient resolution, you could simulate the entire brain this way, atom by atom. Since the real atoms and the simulated atoms behave identically, the real person and the simulated person should also behave identically (allowing for quantum randomness). Hence, as long as no supernatural or spiritual elements are involved in consciousness, it must be possible to build a conscious entity inside a sufficiently powerful computer.
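
To make the simulation point concrete, here is a minimal toy sketch in Python (the two-particle setup, the gravity-only force law, and every constant are invented purely for illustration; this is not a brain model): it steps a pair of point particles forward under Newton’s laws using velocity Verlet integration. An atom-by-atom brain simulation would apply the same principle, a known physical update rule executed repeatedly, at an enormously larger scale.

    # Toy sketch: point particles evolving under Newtonian gravity,
    # integrated with velocity Verlet. Illustrates "known physical law
    # -> computable update rule"; it is NOT a model of the brain, and
    # all constants and initial conditions are arbitrary.
    import numpy as np

    G = 1.0        # toy gravitational constant
    dt = 0.01      # time step
    pos = np.array([[0.0, 0.0], [1.0, 0.0]])   # two particles in 2-D
    vel = np.array([[0.0, 0.0], [0.0, 1.0]])
    mass = np.array([1.0, 0.001])

    def accelerations(pos):
        # Newtonian gravitational acceleration on each particle
        acc = np.zeros_like(pos)
        for i in range(len(pos)):
            for j in range(len(pos)):
                if i != j:
                    r = pos[j] - pos[i]
                    acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
        return acc

    acc = accelerations(pos)
    for step in range(1000):
        pos += vel * dt + 0.5 * acc * dt ** 2    # advance positions
        new_acc = accelerations(pos)
        vel += 0.5 * (acc + new_acc) * dt        # advance velocities
        acc = new_acc

The only obstacles to scaling this idea up to every atom in a brain are memory and processing power, which is exactly the claim above: simulation is limited by resources, not by any in-principle barrier.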

Q3). Don’t many philosophers think that computers can never really understand the world the way humans can, as Searle’s Chinese Room argument claims?

A3). This idea is mainly the result of earlier, abandoned AI projects, where, for example, a cow was represented in the model by a single string variable, “COW”. Obviously, just using the word “COW” isn’t going to make the computer really understand the full range of experiences we associate with real cows, any more than just saying the German word “Uhr” (clock) would make an English speaker understand what a clock was. However, this problem is specific to old-fashioned AI systems, and it does not apply to AIs or computers in general.

Q4). What if human consciousness requires quantum computing, and no conventional computer could match the human brain?

A4). Human neurons have been the subject of an immense amount of research, and so far, there’s no evidence for any kind of quantum computation within the brain. A quantum computer needs to be kept isolated from any disturbances to avoid wave function collapse, and any atom in the brain is bombarded constantly by photons and other atoms; the brain is an extremely messy place on the atomic and molecular level. However, even if consciousness requires quantum computing, it doesn’t mean that AI is impossible; it just means that we have to build a quantum computer to run it on.

Q5). What if human consciousness requires holonomic properties?

A5). Holonomic brain theory is still a fringe hypothesis, and it goes against the bulk of currently accepted neuroscience. Even if it turned out to be correct, the situation would be the same as with quantum computing: it would change what hardware an AI requires, not whether AI is possible.

Q6). Doesn’t everyone know that a brain isn’t enough for an intelligent mind, and you also need a body, emotions, and a society of other people?

A6). Humans need a body, emotions, and society to function, but there’s no real reason an AI would need them. AIs and humans are vastly different from each other, and what applies to one doesn’t automatically translate to the other. Even if an AI did need a body, there are plenty of simulated environments in which something like a body could be provided. AIs will also automatically have a society of sorts, by virtue of interacting with the researchers building them.

Q7). Don’t philosophers think that consciousness, as a purely subjective experience, can’t be studied in a reductionist/outside way, nor can its presence be verified?

A7). As in quantum mechanics, we don’t need to settle unverifiable philosophical questions before doing useful work. What we can verify, and do science with, is that human intelligence has had a huge impact on the world, and that any increase in the level of available intelligence will have huge consequences for the human species.

Q8). Wouldn’t a computer lack human intuition, causing it to be much less capable in many situations?

A8). Human intuition, although it may seem mysterious to us, is based on a physical network of subconscious memories and observations. This network has been studied by cognitive scientists, and there’s no reason why it couldn’t be programmed in, just like the brain’s more deliberative, logic-based reasoning.

Q9). If we do not properly understand feelings and qualia, couldn’t we accidentally cause our AI systems to suffer immensely when they were being developed?

A9). It is true that we could cause immense suffering if we get this wrong. However, the fact that the problem is difficult does not exempt us from having to solve it. Nuclear disarmament is a difficult problem too, but no one would seriously suggest that we should stop working on it just because it is risky and hard.
