Q1). Even if an AI is smarter than us, wouldn’t it still need money/power/political support/human cooperation to achieve its goals?
A1). Looking at early humans, one wouldn't have expected them to rise to a dominant position. Many other animals had venomous stings, sharp teeth, excellent eyesight, or acute hearing, while humans had no extraordinary physical capabilities. All humans had was greater intelligence: the ability to out-think lions, tigers, and bears. History records who won. Today, many formerly dominant animals, such as elephants, are on the verge of extinction because of human poachers. For more on this, see SIAI Research Fellow Eliezer Yudkowsky's essay The Power of Intelligence.
In addition, a sufficiently advanced AI would have a powerful tool that early humans lacked: the ability to persuade people to do its bidding. On the African savanna, there was no way to persuade an animal to let you kill it for its meat. By contrast, the countless historical incidents of bribery, cronyism, and corruption show that humans are very persuadable, if you're smart enough to tell them what they want to hear. SIAI researcher Eliezer Yudkowsky's famous AI box experiment suggests that a sufficiently advanced AI would probably be able to persuade humans to do what it wants, even if the humans are initially unwilling.
Q2). Won’t an AI still be bound by the laws of physics, even if it is really smart?
A2). Of course; intelligence is a way to manipulate physics, not a way to break it. However, this doesn't mean that an AI would be harmless, or that it wouldn't be a very serious threat. Nothing in the laws of physics says that a hostile AI, or even something much more mundane (like an asteroid impact), can't wipe out the human species. Nothing in the laws of physics says that a Friendly AI couldn't solve all the world's problems, either. Ultimately, the laws of physics are morally neutral: they are neither the friends of humanity nor its enemies. It is up to us to learn how to use physics to create a world that we like, as opposed to a 1984-style dystopia or a barren lump of ash.
Q3). Wouldn't an AI still be limited by the need to conduct physical experiments, no matter how fast a computer it runs on?
A3). Many experiments are increasingly carried out as simulations, from protein-folding analysis to nuclear weapons testing. The main limitations on these simulations are getting the information needed to set the initial conditions and obtaining enough computing power. Available computing power is increasing exponentially, in accordance with Moore's Law, and researchers are constantly improving the amount and accuracy of the information available for setting up simulations. And, of course, in some research fields, most notably mathematics and computer science, physical experiments aren't necessary at all.
In addition, even when we do perform physical experiments, the process doesn't consist entirely of physical procedures; often, the procedures themselves take only a small part of the total time needed to test a new theory. Scientists think through dozens of potential experiments before deciding which ones would be the most useful, and this could be done much faster by an AI. Interpreting experimental results often involves slow, labor-intensive data processing, which an AI could also do more quickly. And computerized and robotized laboratory equipment has already sped up research by orders of magnitude in a number of areas, even when the experiments themselves are still run by human scientists. Given all the areas where faster thinking and computing can make a difference, a million-fold acceleration in AI thinking might not accelerate all research a million-fold, but it is still likely to accelerate it by an immense amount.
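This intuition can be made concrete with an Amdahl's-law-style calculation. If only part of the research cycle consists of thinking and computing that an AI can speed up, the overall acceleration is capped by the remaining physical-procedure time. The specific fractions below are illustrative assumptions for the sake of the sketch, not figures from the text:

```python
def overall_speedup(accelerable_fraction: float, factor: float) -> float:
    """Amdahl's-law-style bound: only `accelerable_fraction` of the
    research cycle (thinking, simulation, data analysis) is sped up
    by `factor`; the rest (physical procedures) runs at normal speed."""
    return 1.0 / ((1.0 - accelerable_fraction) + accelerable_fraction / factor)

# Illustrative assumption: 90% of a research cycle is thinking/computing.
# Even a million-fold acceleration of that part yields "only" ~10x overall:
print(overall_speedup(0.90, 1e6))   # ~10x
# If automation pushes the accelerable share to 99.9%, the gain is ~1000x:
print(overall_speedup(0.999, 1e6))  # ~1000x
```

The point of the sketch is that the overall speedup is far below a million-fold but still immense, and it grows rapidly as more of the research cycle is automated.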
Q4). Why would anyone choose to place machines into positions of power?
A4). If we're worried about placing machines into positions of power, it's too late: the machines are already there. If, tomorrow, all the computers in the world shut down, the entire planet would be thrown into chaos. Basic utilities, such as electricity, cable, phone, and of course the Internet, would all fail, because they all run on computerized infrastructure. Most cars wouldn't even start, because the microchips in them wouldn't work. You couldn't get paid, use a credit card, or take money out of a bank. And the government and the military would be crippled, unable to communicate or take effective action, for lack of a command-and-control infrastructure. If we want to make ourselves less dependent on computers, AI should be the least of our concerns.