What would you do if AI were dangerous?

If we knew how to build a machine that chooses its outputs so as to maximize some property of the surrounding universe, such a machine would be very dangerous, because maximizing almost any easily defined property leads to a worthless universe (one without humans, or with humans living pointless lives, etc.). I believe the preceding statement is uncontroversial; most arguments around the necessity of Friendly AI are really about how likely we are to build such a machine, whether something else will happen first, and so on.

Instead of adding to the existing arguments, I want to reframe the question thus: what course of action would you recommend to a small group of smart people, assuming for the moment that the danger is real? In other words, what should SingInst do on an alternate Earth where normal human science will eventually build unfriendly AI? In particular:

- How do you craft your message to the public?

- What’s your hiring policy?

- Do you keep your research secret?

- Do you pursue alternate avenues like uploads, or focus only on FAI?

For the sake of inconvenience, assume that many (though not all) of the insights required for developing FAI can also be easily repurposed to hasten the arrival of UFAI.

Thanks to Wei Dai for the conversation that sparked this post.