Eliezer: There are a few classes of situations I can think of in which it seems like the correct solution (given certain restrictions) requires a source of randomness:
One example would be Hofstadter’s Luring Lottery. Assuming all the agents really did have common knowledge of each other’s rationality, and assuming no form of communication is allowed between them in any way, isn’t it true that no deterministic algorithm for deciding how many entries to send in (if any) has a higher expected return than the best randomized one? (That is, the solutions which turn out to be better are randomized ones, not that randomizing automatically makes a solution better.)
The reason is that you’ve got a prisoner’s-dilemma-type situation there, with all the agents using the same decision algorithm. So any deterministic algorithm means they all do the same thing, which means many entries are sent in, which means the pot shrinks, while not boosting any individual agent’s expected winnings at all.
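To make the pot-shrinking effect concrete, here is a minimal Monte Carlo sketch. The numbers are my own illustrative assumptions (a $1,000,000 pot divided by the total entry count, 100 identical agents, one entry submitted per entering agent), not anything specified in the thread. It compares the symmetric deterministic strategy (everyone enters) against a symmetric randomized one (each agent enters with probability 1/n):

```python
import random

def expected_payoff(n_agents, p_enter, pot=1_000_000, trials=20_000, seed=0):
    """Per-agent expected payoff when each of n_agents independently
    submits one entry with probability p_enter.  The pot is divided by
    the total number of entries and one entry is drawn as the winner;
    by symmetry each agent's expectation is the mean prize / n_agents."""
    rng = random.Random(seed)
    total_prize = 0.0
    for _ in range(trials):
        entries = sum(rng.random() < p_enter for _ in range(n_agents))
        if entries > 0:  # if nobody enters, nobody wins anything
            total_prize += pot / entries
    return total_prize / trials / n_agents

n = 100
deterministic = expected_payoff(n, 1.0)    # everyone enters: each expects pot / n**2
randomized = expected_payoff(n, 1.0 / n)   # each agent enters with probability 1/n
```

Since every agent runs the same algorithm, the deterministic strategy guarantees n entries and shrinks the pot to pot/n, while the randomized strategy keeps the expected entry count near one, so the per-agent expectation comes out far higher even though no individual can unilaterally do better.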
My shot at improving on randomness for things like the Luring Lottery: http://lesswrong.com/lw/vp/worse_than_random/18yi