There are situations where two agents that can read each other’s source code want a shared bit of random information (e.g., they want to cooperate and split an indivisible thing by randomly deciding who gets it, so that each receives half of it in expectation).
If these agents don’t have access to unpredictable sources of randomness (e.g., the interaction is acausal trade through very detailed simulations, and each can perfectly predict the relevant parts of the other’s environment), is there any way for them to coordinate on generating a random bit that can’t be manipulated or exploited?
I feel like the answer should be “no”. Schemes like “both agents transparently avoid looking at a designated part of the other’s source code while they coordinate on the scheme, then look and generate the bit from those parts” fail, because something else could read the whole of agent A’s source code first and then design a successor (or a simulated agent B) that won’t look there but will produce exactly the bit it needs.
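To make that concrete, here is a minimal sketch of the kind of scheme I mean and why it fails; the function name and regions are all hypothetical:

```python
import hashlib

def bit_from_unread_region(source_region: bytes) -> int:
    # Low bit of a hash of the region of source neither agent looked at.
    return hashlib.sha256(source_region).digest()[0] & 1

# Hypothetical usage: each agent designates a region of its own source,
# transparently never reads the other's designated region, and then:
#   shared_bit = bit_from_unread_region(region_a) ^ bit_from_unread_region(region_b)
#
# The failure mode: whoever writes B *after* reading all of A's code can
# search over candidate contents of region_b until the XOR comes out as desired.
```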
But maybe I’m missing something?
In principle (depending on the model of computation), this should be possible.
Given this degree of knowledge of how the other operates, each agent should be able to obtain a shared random bit by the following steps (a toy sketch in code follows the list):
1. choose some computable real number N and some digit position M,
2. check that they have no current expectation of that digit being biased,
3. compute the digit,
4. use the other’s source code to compute the other’s digit,
5. combine the digits (e.g., XOR for binary),
6. verify that the other didn’t cheat,
7. use the result to enact the decision.
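Here is a minimal sketch of steps 1 and 3–5, assuming each agent picks the square root of a non-square integer as its computable real; the specific numbers and positions are hypothetical, and the genuinely hard parts (step 2’s expectation check and step 6’s verification) are omitted:

```python
from math import isqrt

def sqrt_binary_digit(n: int, m: int) -> int:
    """The m-th binary digit after the point of sqrt(n), for non-square n."""
    # d_m = floor(sqrt(n) * 2**m) mod 2, and isqrt(n << 2*m) == floor(sqrt(n) * 2**m).
    return isqrt(n << (2 * m)) & 1

# Hypothetical picks: each agent commits to an (N, M) pair it has no current
# reason to expect is biased, then computes the *other's* digit from the
# other's source code (here we just call the same function directly).
bit_a = sqrt_binary_digit(2, 10_001)   # agent A's pick: digit 10001 of sqrt(2)
bit_b = sqrt_binary_digit(3, 20_011)   # agent B's pick: digit 20011 of sqrt(3)

shared_bit = bit_a ^ bit_b             # step 5: combine by XOR
```

The reason XOR is the right combiner: the shared bit is unbiased so long as at least one of the two digits is unpredictable, so neither agent can bias the outcome unilaterally without cheating at an earlier step.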
In principle, each agent can use the other’s source code to verify that the other will not cheat in any of these steps.
Even if B currently knows a lot more about the values of specific numbers than A does, that doesn’t help B get the result they want. B has to choose a number and position that B doesn’t expect to be biased, and A can check whether B really did not expect it to be biased.
Note that this, like almost anything to do with agents verifying each other via source code, is purely theoretical and utterly useless in practice. In practice, step 6 (verifying that the other didn’t cheat) will be impossible for at least one party.
Agent A doesn’t know that the creators of agent B didn’t run the whole interaction with a couple of different versions of B’s code until they found one whose N and M produce the bit they want. You can’t deduce that by looking at B’s code.
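A sketch of that selection attack, with every helper hypothetical (make_agent_b builds an honest-looking B from a seed; run_protocol plays out the whole interaction against A’s fixed source):

```python
def grind_for_bit(make_agent_b, run_protocol, agent_a_source, wanted_bit):
    """Rerun the whole interaction with fresh versions of B until the bit comes out right."""
    seed = 0
    while True:
        candidate_b = make_agent_b(seed)   # every seed yields a different honest-looking B
        if run_protocol(agent_a_source, candidate_b) == wanted_bit:
            return candidate_b             # deploy this one; its code alone shows no cheating
        seed += 1
```

Each candidate B genuinely follows the protocol, so inspecting the deployed B’s source reveals nothing; the bias lives entirely in the selection loop, which A never sees.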
I’m very confused about what the model is here. Are you saying that agents A and B (with source code) are just proxies created by other agents C and D, whose internal details are unknown to the agents on the other side of the communication/acausal barrier?
What is the actual mechanism by which A knows B’s source code and vice versa, without any communication or any causal links? How does A know that D won’t just ignore whatever decision B makes and vice versa?