The motivating principle is to treat one’s choice of decision theory as itself strategic.
I share the intuition that this lens is important. Indeed, there might be some important quantitative differences between
a) I have a well-defined decision theory, and am choosing how to build my successor
and
b) I’m doing some vague normative reasoning to choose a decision theory (like we’re doing right now),
but I think these differences are mostly contingent, and the same fundamental dynamics about strategicness are at play in both scenarios.
Design your decision theory so that no information is hazardous to it
I think this is equivalent to your decision theory being dynamically stable (that is, its performance never improves by having access to commitments), and I’m pretty sure the only way to attain this is complete updatelessness (which is bad).
That said, again, it might well be that, given our prior, many parts of cooperation-relevant concept-space seem very safe to explore, so that “for all practical purposes” some decision procedures are basically completely safe, and we’re able to use them to coordinate with all agents (even if we haven’t “solved in all prior-independent generality” the fundamental trade-off between updatelessness and updatefulness).
Got it, I think I understand better the problem you’re trying to solve! It’s not just being able to design a particular software system and give it good priors, it’s also finding a framework that’s robust to our initial choice of priors.
Is it possible for all possible priors to converge on optimal behavior, even given unlimited observations? I’m thinking of Yudkowsky’s example of the anti-Occamian and anti-Laplacian priors: the more observations an anti-Laplacian agent makes, the further its beliefs go from the truth.
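A minimal sketch of that divergence (my own toy numbers, not from the discussion): contrast Laplace’s rule of succession with an “anti-Laplacian” rule that treats past frequencies as evidence *against* recurrence. As idealized observations of a 0.9-biased coin accumulate, the anti-Laplacian prediction moves toward 0.1, i.e. further from the truth.

```python
# Toy illustration: a Laplace-rule predictor vs. an "anti-Laplacian" one.
# More data drives the anti-Laplacian predictor further from the true bias.

def laplace(successes, n):
    # Laplace's rule of succession: P(next observation = 1)
    return (successes + 1) / (n + 2)

def anti_laplace(successes, n):
    # inverted rule: the more 1s observed, the fewer it expects
    return (n - successes + 1) / (n + 2)

true_bias = 0.9
for n in (10, 100, 1000):
    s = round(true_bias * n)  # idealized observation counts
    print(n, round(laplace(s, n), 3), round(anti_laplace(s, n), 3))
```

With n = 1000 the Laplace prediction is within 0.01 of the true bias, while the anti-Laplacian prediction sits near 0.1; its distance from the truth grows with n rather than shrinking.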
I’m also surprised that dynamic stability leads to suboptimal outcomes that are predictable in advance. Intuitively, it seems like this should never happen.
Is it possible for all possible priors to converge on optimal behavior, even given unlimited observations?
Certainly not, in the most general case, as you correctly point out.
Here I was studying a particular case: updateless agents in a world that looks remotely like the real world. And, even more particularly, I was thinking about the kinds of priors that superintelligences created in the real world might actually have.
Eliezer believes that, in these particular cases, it’s very likely we will get optimal behavior (we won’t get trapped priors or commitment races). I disagree, and that’s what I argue in the post.
I’m also surprised that dynamic stability leads to suboptimal outcomes that are predictable in advance. Intuitively, it seems like this should never happen.
If by “predictable in advance” you mean “from the updateless agent’s prior”, then nope! Updatelessness maximizes EV from the prior, so it will do whatever looks best from that perspective. If that’s what you want, then updatelessness is for you! The problem is, we have many pro tanto reasons to think this is not a good representation of rational decision-making in reality, nor the kind of cognition that survives for long in reality, because the world is so complex that your prior will be missing a lot of stuff. And multi-agent scenarios in particular make this complexity skyrocket.
Of course, you can say “but that consideration will also be included in your prior”, and that does make the situation better. But eventually your prior needs to end, and I argue that it ends much before you have all the information necessary to confidently commit to something forever (though other people might disagree with this).
Got it, thank you!
It seems like trapped priors and commitment races are exactly the sort of cognitive dysfunction that updatelessness would solve in generality.
My understanding is that trapped priors are a symptom of a dysfunctional epistemology, which over-weights prior beliefs when updating on new observations. This results in an agent getting stuck, or even getting more and more confident in their initial position, regardless of what observations they actually make.
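To make that dysfunction concrete, here is a toy contrast of my own (not from the thread): a standard Bayesian updater vs. a “trapped” updater that explains away any evidence conflicting with the side it already leans toward. Both see 50 observations, each of which genuinely favors hypothesis H (likelihood ratio 2).

```python
# Standard Bayes vs. a "trapped" update rule that reinterprets
# disconfirming evidence as support for the agent's existing lean.

def bayes_update(p, lr):
    # odds-form Bayes: posterior odds = prior odds * likelihood ratio
    odds = p / (1 - p) * lr
    return odds / (1 + odds)

def trapped_update(p, lr):
    # dysfunctional rule: if the evidence points against whichever
    # hypothesis the agent currently favors, invert the likelihood
    # ratio ("that observation must have been misleading")
    if (p < 0.5) == (lr > 1):
        lr = 1 / lr
    return bayes_update(p, lr)

p_bayes = p_trapped = 0.1  # both start skeptical of H
for _ in range(50):        # every observation genuinely favors H
    p_bayes = bayes_update(p_bayes, 2.0)
    p_trapped = trapped_update(p_trapped, 2.0)

print(p_bayes, p_trapped)
```

The Bayesian ends up nearly certain of H; the trapped agent ends up *more* confident in its wrong initial position than when it started, exactly the “regardless of what observations they actually make” failure.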
Similarly, commitment races are the result of dysfunctional reasoning that regards accurate information about other agents as hazardous. It seems like the consensus is that updatelessness is the general solution to infohazards.
My current model of an “updateless decision procedure”, approximated on a real computer, is something like “a policy which is continuously optimized, as an agent has more time to think, and the agent always acts according to the best policy it’s found so far.” And I like the model you use in your report, where an ecosystem of participants collectively optimize a data structure used to make decisions.
Since updateless agents use a fixed optimization criterion for evaluating policies, we can use something like an optimization market to optimize an agent’s policy. It seems easy to code up traders that identify “policies produced by (approximations of) Bayesian reasoning”, which I suspect won’t be subject to trapped priors.
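A minimal sketch of that anytime picture (my own toy framing, not the report’s actual formalism): a fixed prior over worlds induces a fixed scoring rule, “traders” propose candidate policies, and the agent always acts on the best policy found so far. Here the trader is just a random proposer; since the criterion never changes, the best-so-far policy improves monotonically with think-time.

```python
import random

# Hypothetical toy prior, actions, and payoffs (all names are mine)
PRIOR = {"sunny": 0.6, "rainy": 0.4}
ACTIONS = ["picnic", "museum"]
PAYOFF = {("sunny", "picnic"): 10, ("sunny", "museum"): 4,
          ("rainy", "picnic"): 0,  ("rainy", "museum"): 6}

def ev(policy):
    # fixed criterion: expected value of policy(world) under the prior
    return sum(p * PAYOFF[(w, policy[w])] for w, p in PRIOR.items())

def random_trader(rng):
    # a trivial "trader": proposes a random world -> action mapping
    return {w: rng.choice(ACTIONS) for w in PRIOR}

rng = random.Random(0)
best = random_trader(rng)
for _ in range(100):              # more think-time, better policy
    candidate = random_trader(rng)
    if ev(candidate) > ev(best):  # the criterion is fixed, so
        best = candidate          # improvements are monotone
```

A real version would replace the random proposer with smarter traders (e.g. ones that run approximate Bayesian reasoning and submit the resulting policy), but the act-on-the-argmax-so-far structure is the same.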
So updateless agents seem like they should be able to do at least as well as updateful agents, because they can identify updateful policies and use those if they seem optimal. But they can also use different reasoning to identify policies like “pay Paul Ekman to drive you out of the desert”, and automatically adopt those when they lead to higher EV than updateful policies.
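The Parfit’s-hitchhiker numbers behind that example, in a toy of my own (the dollar amounts are made up): the driver rescues you only if he predicts you will pay $100 once in town. Scoring whole policies from the prior makes “pay” dominate, even though paying is a pure loss evaluated after you have already been rescued.

```python
# Toy Parfit's hitchhiker: compare whole policies from the prior
# vs. updateful reasoning after the rescue has already happened.

RESCUE_VALUE = 1_000_000  # hypothetical value of not dying in the desert
PAYMENT = 100             # price of the ride, paid once in town

def prior_ev(policy_pays):
    # the driver predicts your policy accurately, so you are rescued
    # exactly when your policy is to pay
    return RESCUE_VALUE - PAYMENT if policy_pays else 0

def posterior_value_of_paying_once_rescued():
    # updateful reasoning after rescue: paying only costs money
    return -PAYMENT

policies = {True: prior_ev(True), False: prior_ev(False)}
best_policy = max(policies, key=policies.get)  # commit to paying
```

The updateless evaluation picks the pay-policy (EV $999,900 vs. $0), while the post-rescue updateful evaluation would refuse to pay; a predictor who sees that refusal coming never offers the ride.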
I suspect that the generalization of updatelessness to multi-agent scenarios will involve optimizing over the joint policy space, using a social choice theory to score joint policies. If agents agree at the meta level about “how conflicts of interest should be resolved”, then that seems like a plausible route for them to coordinate on socially optimal joint policies.
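As a sketch of that route (a toy of my own, assuming the agents have already agreed at the meta level on Nash’s bargaining solution as their social choice rule): enumerate the joint policy space, score each joint policy by the product of gains over the no-agreement outcome, and pick the maximizer.

```python
from itertools import product

ACTIONS = ["hawk", "dove"]
# hypothetical payoff table: (agent 1 utility, agent 2 utility)
PAYOFF = {("hawk", "hawk"): (0, 0),
          ("hawk", "dove"): (6, 1),
          ("dove", "hawk"): (1, 6),
          ("dove", "dove"): (4, 4)}
DISAGREEMENT = (0, 0)  # payoffs if no joint policy is agreed

def nash_product(joint):
    # Nash bargaining score: product of each agent's gain over
    # the disagreement point (clamped at zero)
    u1, u2 = PAYOFF[joint]
    d1, d2 = DISAGREEMENT
    return max(u1 - d1, 0) * max(u2 - d2, 0)

best_joint = max(product(ACTIONS, ACTIONS), key=nash_product)
```

Here (dove, dove) wins with a Nash product of 16, beating both asymmetric outcomes (product 6). Note that nothing in this sketch requires either agent to model the other’s cognition, only the shared payoff table and the agreed scoring rule, which is why the best-responding-to-a-mind complexity problem doesn’t arise.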
I think this approach also avoids the sky-rocketing complexity problem, if I understand the problem you’re pointing to. (I think the problem you’re pointing to involves trying to best-respond to another agent’s cognition, which gets more difficult as that agent becomes more complicated.)