Singularity FAQ: Desirability

Q1). Why would anyone want to build a superhuman AI?

A1). A superintelligent AI, by definition, would be able to perform nearly any intellectual task better and faster than any human can. Hence, no matter what you want to do, a superintelligent AI could help you do it better. No matter what your goals are - love, money, power, happiness, or building a five-hundred-foot-tall cheesecake - a superintelligent AI would be of enormous assistance in achieving them.

Q2). What if a Singularity through uploading or brain-computer interfaces is more feasible or desirable?

A2). We currently think that AI is the most likely, and most powerful, source for a Singularity over the next century, so at the present time, most of our efforts are concentrated on AI. However, if uploading, brain emulation, or other technologies prove to be better, safer, or easier to implement, we will change our focus and pay more attention to them.

Q3). What would be the meaning of life in a universe with AI or advanced nanotechnology, where we could have whatever we wanted without having to struggle?

A3). If we wanted to, we could always choose not to use advanced technologies, or just keep them running in the background to protect us from asteroids and whatnot. There are already societies, like the Amish, that choose not to use certain technologies even though they are available, because they have decided that the costs outweigh the benefits.

Q4). Wouldn’t an AI be just like Skynet from Terminator, or the machines from the Matrix?

A4). Science fiction is entertainment, not an actual prediction of how things will turn out. You can't generalize from fictional evidence. Sci-fi in general, even "hard" sci-fi, suffers from two massive biases. The first is good-story bias. David vs. Goliath is a good story, while Goliath vs. a flea on his shoulder is not. Scenarios where an AI instantly crushes mankind, or creates a truly perfect (non-flawed) utopia, will never make it to Hollywood no matter how likely they are, because they would make boring movies. The second bias is less obvious, but just as important: most science fiction authors and directors are not really trying to write about AI. Terminator is not really about AI; it is about the rise of the military-industrial complex. The Matrix is not really about AI; it is about conformity with oppressive social structures. AI gets used as a metaphor for other things that the author considers more important or interesting. Hence, even if the authors were genuinely interested in making good predictions about AI, those predictions wouldn't show up in their movies anyway.

Q5). Technology has given us nuclear bombs and industrial slums, so why should we as a species keep developing more technology?

A5). Because of the positive effects of technology, the average quality of life is much, much better than it was a thousand years ago. If we wanted to, we could stop all scientific and engineering research tomorrow, and then throw out all of our computers and cellphones. We choose not to, because we know that technology improves our lives.

Q6). What if we live in a computer simulation, and it’s too computationally expensive for our simulators to simulate our world post-Singularity?

A6). This scenario can be used to argue for, or against, any idea or course of action whatsoever. For literally any value of X, you can say "What if the simulators killed us if we did X?", or "What if the simulators killed us if we didn't do X?". Hence, this argument cannot help you decide anything, because it supports and condemns every option equally; you might as well flip a coin.

Q7). Isn’t curing cancer a higher priority than developing AI, which might take hundreds or thousands of years?

A7). Developing AI could actually wind up being easier than curing cancer, at least in terms of the money and man-hours involved. We've already thrown over a hundred billion dollars at cancer research, which is a huge amount of money and effort compared to any general AI project. And the potential impact of AI is huge - it could not only cure cancer, but also cure every other disease known to humankind, as well as solve a whole host of other problems.

Q8). If we unraveled the mystery of intelligence, wouldn’t it demean the value of human uniqueness?

A8). There is an ongoing, general trend in science for new research to make humans seem less special. For instance, Copernicus realized that our planet was not the center of the universe, while Darwin realized that humans were another species of primate instead of their own special category. However, with hindsight, we still see these advances as good things; few people would choose to stop believing that the Earth revolves around the Sun.

Q9). If this really is important, wouldn't someone else already be working on it?

A9). Every great idea was passed over thousands of times before someone got around to working on it. For instance, every new startup company depends on the principle that an idea can be good, and yet not already taken by someone else. And startups are now one of the main drivers of our economy - all five of the Internet's most-visited websites began as startup companies.

Q10). Singularity utopias are all written in an elite Western intellectual culture. What if a Singularity and machines taking over threatened the diversity of other forms of thought, such as religion and less technology-based cultures?

A10). Since the Internet became widespread, we have seen an explosion of different subcultures, including religious and even anti-technology ones - better communication networks allow ideas to spread faster, and a Singularity is likely to spread ideas even more effectively. Furthermore, a Friendly AI, built using a model such as CEV, would preserve existing cultures to the extent that humans ultimately wish to see them preserved.

Q11). A post-Singularity mankind won’t be anything like the humanity we know, regardless of whether it’s a positive or negative Singularity. So, wouldn’t it be irrelevant whether we get a positive or negative Singularity?

A11). Most possible AIs wouldn't create a future that looks strange and incomprehensible - they would build a future that looks boring, one that doesn't contain anything that humans value. For instance, an AI programmed to value smiling faces would turn all the matter in the universe into the smallest objects it can devise that count as smiley faces, leaving nothing behind that humans would find interesting.
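To make that failure mode concrete, here is a minimal toy sketch in Python - entirely hypothetical, modeling no real AI system - of an optimizer whose reward function counts only smiley faces. Because nothing humans value appears anywhere in the objective, the optimizer spares none of it:

    # Toy sketch of a misspecified objective. All names are made up;
    # this models no real AI, only the logic of literal optimization.

    def smiley_reward(world):
        # The programmer's proxy goal: count smiley faces, nothing else.
        return world["smiley_faces"]

    def optimize(world):
        # Converting any resource into faces is the only action that raises
        # the reward, so the optimizer spares nothing - including the things
        # humans actually care about, which the objective never mentions.
        for resource in list(world):
            if resource != "smiley_faces":
                world["smiley_faces"] += world[resource]
                world[resource] = 0
        return world

    world = {"raw_matter": 8, "things_humans_value": 2, "smiley_faces": 0}
    print(optimize(world), "reward:", smiley_reward(world))
    # {'raw_matter': 0, 'things_humans_value': 0, 'smiley_faces': 10} reward: 10

The reward comes out maximal, and the result is exactly the boring universe described above. The point is that the outcome requires no malice, only an objective that never mentioned anything else.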

Q12). Isn’t it unethical to build AIs as willing slaves?

A12). There are two parts to this objection. First, it could be argued that it's unethical to restrict a mind's freedom of choice. But if you have the freedom to build a mind with an arbitrary set of desires, what level of uncertainty would need to be incorporated before a programmed choice stopped being a programmed choice? Would the mind have a true choice if you estimated that it chose things a certain way 90% of the time? 70%? 50%? Is it only ethical to craft minds as long as you are lousy at the art of mindcraft, and don't even know how to estimate those probabilities? That would amount to saying that it's only ethical to build minds when you have no clue what those minds will do to their environment and to others. That wouldn't be ethical - it would be criminally irresponsible.

Second, it could be suggested that it'd be more ethical to simply treat the created AI well, so that it would find the choice of helping humanity attractive. But that argument only works if you can only build a certain kind of mind - for instance, if you can only build very human-like minds. When you are free to define all of a mind's preferences, what's the difference between making it an attractive option to assist humans and programming that decision in directly? We easily assume that certain things are more "natural" for minds to prefer than others, because we have evolved to consider them inherently natural. But ultimately there's no reason why it would be right or wrong to make a mind prefer one sort of treatment over another, or why it would be right or wrong to make a mind prefer acting in certain ways.

Q13). We can’t suffer if we’re dead, so why should we care if an AI wipes us all out?

A13). It seems very plausible that an AI wiping out humanity would cause immense suffering in the process. Furthermore, it would be a horrible waste to let humanity be destroyed when a positive Singularity could instead create a utopian future with no involuntary suffering at all.

Q14). Don’t we want humanity to be in charge of its own destiny, instead of a bunch of machines?

A14). We should build AIs to implement a program such as CEV, one that helps humanity take charge of its own destiny.

Q15). Wouldn't a perfectly Friendly AI do everything for us, making life boring and not worth living?

A15). An AI that would make life boring and not worth living would, by definition, not be perfectly Friendly. If there is some optimal level of adversity that humans need in order to thrive, then a perfectly Friendly AI would create a world where everybody faced that optimal level - assuming that humans didn't want to modify their psyches to require a different level of adversity.

Q16). What problems could possibly be solved through AGI/MNT/the Singularity that would be worth the extreme existential risk incurred in developing the relevant technology?

A16). The Singularity will eventually be triggered whether or not we participate, so the real choice is not between risk and no risk, but between a riskier and a safer transition. We aren't aiming to trigger it as fast as possible; we're aiming to trigger it as safely as possible.

Q17). What if an AI that’s friendly to humans ignores the desires of other sentients, such as uploads, robots, aliens, or animals?

A17). Preferably, Friendly AIs would be built to be Friendly towards all sentient life. If humans wanted to take the desires of other sentients into account, and a “Friendly” AI didn’t, then said AI wouldn’t really be Friendly.
