The secrecy aspect of these games continues to rub me the wrong way.
I understand the argument: an enumeration of strategies an oracle A.I. might take would only serve as a list of things a critic could point to and claim, “None of these would ever convince me!”
But the alternative is that critics continue to claim “an oracle A.I. could never convince me!”, and the only ‘critics’ whose minds have actually been changed are skeptical readers of lesswrong.com who are already familiar with the arguments for friendly A.I. and who happen to invest multiple hours actually taking part in a simulation of the whole procedure.
So I suppose my point is two-fold:
Anonymous testimony without chatlogs doesn’t actually convince skeptics of anything.
Discussions of actual strategies, at worst, inform readers of avenues of attack they might not have thought of; at double worst, they supply people who will probably never be convinced that oracle AIs might be dangerous with a list of things to pretend they’re immune to.
I’m not so sure we’d gain that much larger an audience by peering under the hood. I’d expect the demystifying effect and hindsight bias to counteract most of the persuasive power of hard details, though I suppose only Eliezer, Tuxedage, and their Gatekeepers can determine that.
But I’m also concerned that this might drag our community a bit too far into AI-Box obsession. This should just be a cute thought experiment, not a blood sport; I don’t want to see people get hurt by it unless we’re especially confident that key minds will be changed. Some of the Dark Arts exhibited in these games are probably harmful to know about, and having the logs on the public Internet associated with LessWrong could look pretty awful. Again, this is something only the participants can determine.
Even someone who isn’t persuaded by an “AI” character in a log will come away with the impression that AIs could be particularly persuasive. In a world where most people don’t really imagine AIs, this impression might be relevant news for a lot of people and can only help FAI research.
Reading a log and engaging in a conversation are very different experiences.