Singularity FAQ: General Questions

Q1). If AI is a serious threat, wouldn't the American government or some other official agency step in and take action in order to protect national security?

A1). Governments often don't see future technologies as much of a risk. Government action largely depends on politics and on what voters want, and it's much harder to run a campaign on things that might happen twenty or thirty years from now than on things that are happening today, like poverty in Africa or instability in the Middle East. For instance, when Leo Szilard first conceived of the nuclear chain reaction, the governments of the US and Britain didn't take it very seriously. Szilard had to go to Einstein, the most famous scientist in the world, and ask him to write a letter to President Roosevelt in order to get the political and military backing he needed.

In addition, governments usually pay more attention to things that are widely known or discussed in the news than to things the popular media ignores. Only a tiny number of reporters treat rogue AI as a serious concern on par with nuclear war or global warming, and the problem has received very little press coverage compared to topics like the economy or healthcare. Most government employees have never even heard of the “Singularity”, “molecular nanotechnology”, “existential risk”, or “Unfriendly AI”.

Also, even when a problem is immediately obvious, it’s very common for no one with any power to notice it, or to take action if they do notice. For instance, until recently, thousands of old Soviet nuclear weapons in Russia were lightly guarded, or not guarded at all. Any terrorist with a truck and a dozen men with AK-47s could have walked in and stolen an armed, fully functional atomic bomb. For years, no one in government noticed this problem, despite its obvious severity. Only recently has Congress begun to provide funding to secure these old weapons against terrorists, criminal syndicates, rogue states, and other threats.

Q2). Won’t the US government, Google, or some other large organization with billions of dollars and thousands of employees, be the first ones to develop strong AI?

A2). Very few people at the United States Department of Defense, Google, Harvard, Microsoft, and other such large organizations have shown any interest in building strong AI. The field still has a bad reputation from the “AI winter” of the 1970s, when scientists first realized that building a general AI was much more difficult than they had originally thought. As The New York Times put it, “At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.” The 2009 Conference on Artificial General Intelligence drew fewer than two hundred attendees, none of whom had anything close to the level of funding that the Manhattan Project or the Apollo program had.

Q3). What if the Singularity Institute and its supporters are just another “doomsday cult”, like the religious extremists who proclaim that “the end is nigh”?

A3). We at the Singularity Institute do indeed think that humanity is in danger, but this is hardly a “fringe” or “cult-like” opinion. Stephen Hawking, one of the most eminent scientists in the world, has stated that humanity cannot survive indefinitely at its current stage of development. Sir Martin Rees, Astronomer Royal and President of the Royal Society, has written a book called “Our Final Hour” about the threats facing humanity, in which he gives our species only a 50% chance of surviving the 21st century (http://www.amazon.com/Our-Final-Hour-Environmental-Century/dp/0465068626). And Nick Bostrom, director of the Future of Humanity Institute at Oxford, is one of the foremost advocates of confronting the dangers our species faces.

In addition, the Singularity Institute has very few “cult-like” characteristics. We don’t, as an institution, place our trust in any aliens, supernatural beings, mystical forces, or anything other than the boring old world of science and technology and atoms. We don’t tell anyone which God to worship, how to dress, where to live, who to marry, what to eat, which books to read, which websites to visit, who to vote for, what music to listen to, or any of that sort of stuff. We certainly don’t demand that everyone believe the exact same things; our supporters and friends have a very wide range of beliefs on all subjects. To see a summary of some of these beliefs, check out the comments on this article by SIAI research fellows Eliezer Yudkowsky and Anna Salamon.

Q4). Shouldn’t all of humanity have input on how a Friendly AI should be designed, instead of just a few programmers or scientists?

A4). Of course! Unfortunately, there is currently no practical way to do this. There are more than six and a half billion people in the world, and the vast majority of them don’t even have Internet access; it would take decades of work and tens of billions of dollars to do anything on that large a scale. We would love to educate all of humanity about how AI will impact our future, and how our decisions can influence the outcome, but this is not a realistic goal given our current level of resources. If you feel that your voice is not being heard, or that there is some group whose concerns are not being listened to, please do email us at institute@singinst.org.

Q5). Has the Singularity Institute done research and published papers, like other research groups and academic institutions?

A5). Yes! Here are some of the latest papers by our researchers:

Eliezer Yudkowsky:

Ben Goertzel:

Carl Shulman:

Peter de Blanc:
