Q1). What if humans don’t accept being ruled by machines?
A1). Our society has gradually come to accept things that would have sounded preposterous a thousand years ago (racial equality, capitalism, democracy, etc.)
Q2). How do we make sure that an AI doesn’t just end up being a tool of whichever group built it, or controls it?
A2). A sufficiently talented group could build a superintelligent AI that serves only its creators' interests, but that outcome isn't inevitable. It's a possible failure mode, and we need to take steps to prevent it.
Q3). Aren’t power-hungry organizations going to race to AI technology, and use it to dominate the world, before there’s time to create truly Friendly AI?
A3). This would be a very bad thing, so if there’s a significant possibility of it happening, we need to get cracking on our own project.
Q4). What if an FAI only helps the rich, the First World, uploaded humans, or some other privileged class of elites?
A4). Human social status would probably mean very little to any kind of AI; the AI has no real reason to care how many green pieces of paper we have.
Q5). Since hundreds of thousands of people are dying every day, don't we need AI too urgently to let our research efforts be delayed by having to guarantee Friendliness?
A5). However bad the world may be without AI, a rushed project that produces a UFAI could destroy the world and everyone in it, which is far worse.