Singularity FAQ: Miscellaneous

Q1). Isn’t it against the will of God for humans to build true, conscious AI?

A1). Nowhere in any holy text is the creation of intelligent life forbidden. If God, Yahweh, Allah, or any other deity had wanted to forbid it, they certainly could have written that it was forbidden. Indeed, doing so would have helped to convince modern people of the truth of their religion, for how could primitive societies more than a thousand years ago have known that humans would one day be able to create intelligent life without divine assistance?

Q2). Isn’t creating new minds playing God, since only God has the authority to create life and consciousness?

A2). Traditional Abrahamic theology suggests that we are meant to try to follow in God’s footsteps, although we can never actually be God. The Bible says, “Then God said, Let us make man in our image … in the image of God He created him” (Genesis 1:26-27).

Q3). Wouldn’t computers lack souls?

A3). The AI having a soul isn’t a requirement for a successful Singularity.

Q4). Haven’t scientists been claiming that AI is just around the corner for 20 years now?

A4). Most of the people who work in “AI” are focused on small, narrow facets of intelligence, such as chess playing or video gaming. There has actually been quite a bit of success in this field, such as the chess-playing AI Deep Blue (among many others), but as soon as a success is achieved, people simply jump to the next milestone and say “but we haven’t done that yet!”. This cartoon is a humorous take on this phenomenon.

Q5). Since true AI hasn’t even gotten off the ground yet, isn’t it a little too early to start thinking about Friendly AI?

A5). The “it is too early to worry about the dangers of AI” argument has some merit, but as Eliezer Yudkowsky notes, there was very little discussion about the dangers of AI even back when researchers thought it was just around the corner. What is needed is a mindset of caution: a way of thinking that makes safety issues the first priority, and which is shared by all researchers working on AI. A mindset like that does not appear spontaneously; it takes either decades of careful cultivation, or sudden catastrophes that shock people into realizing the dangers. Environmental activists have been talking about the dangers of climate change for decades, but only now are they starting to be taken seriously. Soviet engineers obviously did not have a mindset of caution when they designed the Chernobyl power plant, nor did its operators when they started the fateful experiment. Most AI researchers do not have a mindset of caution that makes them triple-check every detail of their system architectures, or that even makes them realize that there are dangers. If active discussion is postponed to the moment when AI is starting to become a real threat, it will be too late to foster that mindset.

There is also the issue of our current awareness of risks influencing our AI engineering techniques. Investors who have only been told of the promises of AI research, and not of the risks, are likely to pressure researchers to pursue progress by any means available, even dangerous ones. And if the original researchers are aware of the risks and refuse to cut corners, the investors might hire other researchers who are less aware of them. To quote Artificial Intelligence as a Positive and Negative Factor in Global Risk:

“The field of AI has techniques, such as neural networks and evolutionary programming, which have grown in power with the slow tweaking of decades. But neural networks are opaque- the user has no idea how the neural net is making its decisions- and cannot easily be rendered un-opaque; the people who invented and polished neural networks were not thinking about the long-term problems of Friendly AI. Evolutionary programming (EP) is stochastic, and does not precisely preserve the optimization target in the generated code; EP gives you code that does what you ask, most of the time, under the tested circumstances, but the code may also do something else on the side. EP is a powerful, still maturing technique that is intrinsically unsuited to the demands of Friendly AI. Friendly AI, as I have proposed it, requires repeated cycles of recursive self-improvement that precisely preserve a stable optimization target.

The most powerful current AI techniques, as they were developed and then polished and improved over time, have basic incompatibilities with the requirements of Friendly AI as I currently see them. The Y2K problem- which proved very expensive to fix, though not global-catastrophic- analogously arose from failing to foresee tomorrow’s design requirements. The nightmare scenario is that we find ourselves stuck with a catalog of mature, powerful, publicly available AI techniques which combine to yield non-Friendly AI, but which cannot be used to build Friendly AI without redoing the last three decades of AI work from scratch.”

 

Q6). Why should we have to worry about Friendliness? Development towards AI will be gradual, and methods will pop up to deal with it, just as society adapted to electricity, tanks, cannons, and so on.

A6). Unfortunately, it is by no means a given that society will have the time to adapt to artificial intelligences. Once a roughly human-level intelligence has been reached, there are many ways for an AI to become vastly more intelligent (and thus more powerful) than humans in a very short time:

Hardware increase/speed-up: Once a certain amount of hardware achieves human-equivalence, it may be possible to make it faster by simply adding more hardware. While the increase isn’t necessarily linear (many systems need to spend an increasing fraction of their resources on managing overhead as the scale increases), it is daunting to imagine a mind which is human-equivalent and then has five times as much processing power and memory added on. AIs might also be capable of increasing the general speed of development: the essay Staring into the Singularity includes a hypothetical scenario in which technological development is done by AIs, which themselves double in (hardware) speed every two years, two subjective years which shorten as their speed goes up. A Model-1 AI takes two years to develop the Model-2 AI, which takes a year to develop the Model-3 AI, which takes six months to develop the Model-4 AI, which takes three months to develop the Model-5 AI… (a small numeric sketch of this series appears at the end of this answer).

Instant reproduction: An AI can “create offspring” very fast, by simply copying itself to any system to which it has access. Likewise, if the memories and knowledge obtained by the different AIs are in an easily transferable format, they can simply be copied, enabling computer systems to learn immense amounts of information in an instant.

Software self-improvement: This involves the computer studying itself and applying its intelligence to modifying itself to become more intelligent, then using that improved intelligence to modify itself further. An AI could make itself more intelligent by, for instance, studying its learning algorithms for signs of bias and replacing them with better ones, developing ways to manage its working memory more effectively, or creating entirely new program modules for handling particular tasks. Each round of improvement would make the AI smarter and accelerate continued self-improvement. An early, primitive example of this sort of capability was EURISKO, a computer program composed of different heuristics (rules of thumb) which it used for learning and for creating and modifying its own heuristics. Having been fed hundreds of pages of rules for the Traveller science fiction wargame, EURISKO began running simulated battles between fleets of its own design, abstracting useful principles into new heuristics and modifying old ones with the help of its creator. When EURISKO was eventually entered into a tournament, the fleet it had designed won the contest handily. In response, the organizers of the contest revised the rules, releasing the new set only a short time before the next contest. According to the program’s creator, Douglas Lenat, the original EURISKO would not have had the time to design a new fleet from scratch in such a short period; but by then it had learned enough general-purpose heuristics from the first contest that it could build a fleet which won again, even under the modified rules.

And it is much easier to improve a purely digital entity than it is to improve human beings: an electronic being can be built in a modular fashion and have parts of itself rewritten from scratch. The minds of human beings, by contrast, evolved to be hopelessly interdependent, and are so fragile that they easily develop numerous traumas and disorders even without outside tampering.
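To make the arithmetic of that development-speed scenario concrete, here is a minimal sketch in Python. The two-year starting point and the halving of each step come from the example above; everything else (the function name, the number of generations printed) is purely illustrative.

```python
# A toy model of the shrinking-development-time scenario described above.
# Assumption: each new model works twice as fast as its predecessor, so each
# generation needs half as much wall-clock time to build the next one.

def total_development_time(first_step_years=2.0, speedup=2.0, generations=10):
    """Sum the wall-clock years needed to reach each successive model."""
    total = 0.0
    step = first_step_years
    for gen in range(1, generations + 1):
        total += step
        print(f"Model-{gen + 1} finished after {total:.3f} years "
              f"(this step took {step:.3f} years)")
        step /= speedup  # the next, faster model needs half as much time
    return total

if __name__ == "__main__":
    elapsed = total_development_time()
    # The series 2 + 1 + 0.5 + 0.25 + ... converges to 4 years, so arbitrarily
    # many generations of improvement fit into a short, fixed span of calendar time.
    print(f"Total elapsed time after 10 generations: {elapsed:.3f} years")
```

The point of the toy model is simply that the total time does not grow without bound: under these admittedly idealized assumptions, every further generation of improvement squeezes into the same four calendar years.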

Q7). Humans evolved from selfish self-replicators, and we still developed things like love, trust, empathy, and altruism. Hence, if AIs are evolved, wouldn’t they automatically become Friendly from evolutionary selection pressures?

A7). Most humans, when placed in positions of power, are not Friendly in the FAI sense. History is rife with abuses of power; just look at Hitler, Stalin, and Mao, who between them ruled large portions of the world for decades. Additionally, directed evolution on a computer is not likely to resemble human evolution; most of the selection pressures which caused humans to develop morality will be absent or distorted.

Q8). Isn’t trying to build Friendly AI pointless, since a Singularity is by definition beyond human understanding and control?

A8). Once the AI is built, and it achieves superintelligence, the things that it does will indeed be beyond human understanding and control. The key is to program the AI ahead of time, so that we know that the actions it will take are beneficial to humanity, rather than harmful.

Q9). If Unfriendly AI is much easier than Friendly AI, aren’t we going to be destroyed regardless?

A9). There’s no point in giving up on the future of humanity just because things might seem bleak. Even if the problem is difficult, as indeed it is, it is still our responsibility to confront it.

Q10). Other technologies, such as nanotechnology and bioengineering, are much easier than FAI, and they have no “Friendly” equivalent that could prevent them from being used to destroy humanity. So, what if we’re all just doomed?

A10). Again, even if the problem is difficult, as indeed it is, it is still our responsibility to confront it. And if we achieve FAI first, then the risks from the other technologies will be mitigated as well.

Q11). Won’t talking about possible dangers make people much less willing to fund needed AI research?

A11). Funding AI research without considering the dangers is much worse than AI research being delayed. Nick Bostrom comments on this in his article Astronomical Waste: if research is delayed by twenty years, we lose twenty years of future civilization, but if something goes badly wrong, we lose twenty billion years of future civilization.
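To see the shape of that argument, here is a rough, hedged expected-value comparison. The specific numbers below (the value assigned to the long-term future and the probability that rushed research ends in catastrophe) are illustrative assumptions, not figures from Astronomical Waste; the point is only that when the downside is the entire future, even a small probability of disaster outweighs a couple of decades of delay.

```python
# Illustrative expected-value comparison. Every number here is an assumption
# chosen to show the shape of the argument, not a figure taken from Bostrom.

FUTURE_VALUE_YEARS = 20_000_000_000      # assumed value of the long-term future, in civilization-years
DELAY_COST_YEARS = 20                    # cost of proceeding cautiously
P_CATASTROPHE_IF_RUSHED = 0.01           # assumed extra risk from skipping safety work

expected_loss_from_delay = DELAY_COST_YEARS
expected_loss_from_rushing = P_CATASTROPHE_IF_RUSHED * FUTURE_VALUE_YEARS

print(f"Expected loss from a twenty-year delay:        {expected_loss_from_delay:,} years")
print(f"Expected loss from a 1% chance of catastrophe: {expected_loss_from_rushing:,.0f} years")
# Even at a 1% risk, rushing costs an expected 200,000,000 civilization-years,
# ten million times the cost of the twenty-year delay.
```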
