Here’s a question. Would you be willing to pick, say, the tenth-most efficacious arguments and downward, and make them public? I understand the desire to keep anything that could actually work secret, but I’d still like to see what sort of arguments might work. (I’ve gotten a few hints from this, but I certainly couldn’t put them into practice...)
Hmm...
I’ll have to think carefully before revealing any of my own unique ones, but I’ll add that a good chunk of my less efficacious arguments are already public.
For instance, you can find a repertoire of arguments here:
http://rationalwiki.org/wiki/AI-box_experiment
http://ordinary-gentlemen.com/blog/2010/12/01/the-ai-box-experiment
http://lesswrong.com/lw/9j4/ai_box_role_plays/
http://lesswrong.com/lw/6ka/aibox_experiment_the_acausal_trade_argument/
http://lesswrong.com/lw/ab3/superintelligent_agi_in_a_box_a_question/
http://michaelgr.com/2008/10/08/my-theory-on-the-ai-box-experiment/
and of course, http://lesswrong.com/lw/gej/i_attempted_the_ai_box_experiment_and_lost/