Ignoring, again for the sake of a limited argument, all the ideas about planet-sized AIs and superintelligence, it's still easy to see that an AI which can out-think human beings, and which has no interest in their survival, ought to be possible. So even in this humbler futurology, AI is still an extinction risk.
Voted up for this argument. I think the SIAI would be well-served, in terms of accruing donations, support, etc., by emphasizing this point more.
Space organizations might similarly argue: “You might think our wilder ideas are full of it, but even if we can’t ever colonize Mars, you’ll still be getting your satellite communications network.”