Sarah Connor and Existential Risk

It’s probably easier to build an uncaring AI than a friendly one, since friendliness adds an extra, unsolved constraint to an already hard problem. So, if we assume that someone, somewhere is trying to build an AI without solving friendliness, that person will probably finish before anyone who is trying to build a friendly one.

[redacted]

[redacted]

Further edit:

Wow, this is getting a rather stronger reaction than I’d anticipated. Clarification: I’m not proposing practical measures to be implemented. Jeez. I’m deep in an armchair, thinking about a problem that (for the moment) looks very hypothetical.

For future reference, how should I have gone about asking this question without seeming like I want to mobilize the Turing Police?