Singularity FAQ: Alternatives to Friendly AI

Q1). Couldn’t AIs be built as pure advisors, so they wouldn’t do anything themselves?

A1). The problem with this argument is the inherent slowness of all human activity: AI can be far more effective if humans are cut out of the loop, so that the system can carry out decisions and formulate objectives on its own. Consider, for instance, two competing corporations (or nations), each with its own advisor AI that only carries out the missions it is given. Even if the advisor were the one collecting all the information for the humans (a dangerous situation in itself), the humans would still have to spend time deciding how the AI should act in response to that information. If a competitor turned over full control to its own, independently acting AI, that AI could react much faster than one that relied on humans to give it all of its assignments. There would therefore be an immense temptation to build an AI that can act without human intervention.

Also, many people would want an independently acting AI for a simple reason: an AI built only to carry out goals given to it by humans could be used for vast harm, while an AI built to actually care about humanity could act in humanity’s best interests, in a neutral and unbiased fashion. In either case, then, the motivation to build independently acting AIs exists, and the cheaper computing power becomes, the easier it will be for even small groups to build them.

It doesn’t matter if an AI’s Friendliness could trivially be guaranteed by giving it a piece of electronic cheese, if nobody cares enough about Friendliness to give it the cheese, or if giving the cheese costs too much relative to what could otherwise be achieved. Any procedure that relies on handicapping an AI enough to make it powerless also handicaps it enough to severely restrict its usefulness to most potential funders. Eventually somebody will choose not to handicap their own AI, and then the guaranteed-to-be-harmless AI will end up dominated by the more powerful one.

Q2). Wouldn’t a human upload naturally be more Friendly than any AI?

A2). Not necessarily. Humans have a well-known tendency to become corrupt when given power, especially the sort of unique, absolute power that a successful upload would hold; an upload’s Friendliness is therefore no more guaranteed than a designed AI’s.

Q3). Trying to create a theory that absolutely guarantees Friendly AI is an unrealistic, extremely difficult goal, so wouldn’t it be a better idea to attempt a theory of “probably Friendly AI”?

A3). There is no separate theory of “probably Friendly AI”: most of the work in building a theory of FAI goes into giving the AI any non-infinitesimal chance of being Friendly in the first place. If the AI you’ve built has even a 0.1% chance of being Friendly, most of the work has already been done.

Q4). Shouldn’t we work on building a transparent society, where no illicit AI development can be carried out?

A4). Transparent societies carry their own problems, such as enabling dictatorial governments to enforce much more rigid rules on the populace. Moreover, current human institutions could not possibly monitor the entire planet for rogue AI development, even if they were given the power to see instantly what anyone in the world was doing. A number of known, potentially dangerous AI projects already exist, and human governments are doing nothing to stop them.
