Slightly offtopic to your questions (which I’ll try to answer in the other branch), but I’m surprised we seem to disagree on some simple stuff...
In my mind UDT1.1 isn’t about equilibrium selection:
1) Here’s a game that requires equilibrium selection, but doesn’t require UDT1.1 (can be solved by UDT1). Alice and Bob are placed in separate rooms with two numbered buttons each. If both press button 1, both win 100 dollars. If both press button 2, both win 200 dollars. If they press different buttons, they get nothing.
2) Here’s a game that has only one equilibrium, but requires UDT1.1 (can’t be solved by UDT1). Alice and Bob are placed in separate rooms with two numbered buttons each. The experimenter tells each of them which button to press (maybe the same, maybe different). If they both obey, both win 500 dollars. If only one obeys, both win 100 dollars. Otherwise nothing.
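For concreteness, both claims about game 2 can be checked mechanically. Here's a quick sketch (the encoding and names are mine, not from the original comment) verifying that exactly one joint policy pair gets the full 500 on every possible instruction pair, namely "both always obey":

```python
from itertools import product

# Game 2: each player sees an instruction (1 or 2) and presses a button.
# Payoff: 500 if both obey, 100 if exactly one obeys, 0 otherwise.
def game2_payoff(obs_a, obs_b, a, b):
    obeyed = (a == obs_a) + (b == obs_b)
    return [0, 100, 500][obeyed]

# A policy is (action when told 1, action when told 2).
policies = list(product([1, 2], repeat=2))

# Policy pairs that achieve 500 on every possible instruction pair:
best = [(pa, pb)
        for pa in policies for pb in policies
        if all(game2_payoff(oa, ob, pa[oa - 1], pb[ob - 1]) == 500
               for oa in (1, 2) for ob in (1, 2))]
print(best)  # [((1, 2), (1, 2))]: only "both always obey" works
```

The same brute-force check on game 1 would instead return two coordinated policy pairs, which is the sense in which game 1 needs equilibrium selection and game 2 doesn't.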
Maybe we understand UDT1 and UDT1.1 differently? I’m pretty sure I’m following Wei’s intent, where UDT1.1 simply fixes the bug in UDT1’s handling of observations.
The title of the UDT1.1 post is “explicit optimization of global strategy”. The key paragraph:

The fix is straightforward in the case where every agent already has the same source code and preferences. UDT1.1, upon receiving input X, would put that input aside and first iterate through all possible input/output mappings that it could implement and determine the logical consequence of choosing each one upon the executions of the world programs that it cares about. After determining the optimal S* that best satisfies its preferences, it then outputs S*(X).
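In the identical-source-code case, the quoted procedure is easy to sketch in code (the function names are mine; `utility_of_mapping` stands in for the logical-consequence computation on the world programs):

```python
from itertools import product

def udt1_1_act(x, observations, actions, utility_of_mapping):
    """Sketch of UDT1.1: set the input x aside, pick the best global
    input/output mapping S*, then output S*(x)."""
    best_map, best_u = None, float("-inf")
    for choice in product(actions, repeat=len(observations)):
        mapping = dict(zip(observations, choice))
        u = utility_of_mapping(mapping)  # consequences on the world programs
        if u > best_u:
            best_map, best_u = mapping, u
    # Only now apply the optimal mapping to the actual input.
    return best_map[x]

# Problem 2 with identical agents: both run the same mapping; score a mapping
# by its worst case over the experimenter's possible instruction pairs.
def u(mapping):
    return min((500 if mapping[oa] == oa and mapping[ob] == ob else
                100 if mapping[oa] == oa or mapping[ob] == ob else 0)
               for oa in (1, 2) for ob in (1, 2))

print(udt1_1_act(1, [1, 2], [1, 2], u))  # 1: told button 1, it obeys
```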
Since optimal global strategies are also Nash equilibria in the framework of Jessica’s post, we can think of global policy selection as equilibrium selection (at least to the extent that we buy that framework). Your top-level post also seems to buy this connection (?).
I think your problem #1 superficially doesn’t sound like it requires UDT1.1, because it’s very plausible that UDT1 can solve it given a particular correlation structure between Alice’s and Bob’s actions. But I suspect we actually need UDT1.1 to get strong guarantees: UDT1 solves it only under assumptions about that correlation structure, not via any proposed mechanism that would systematically produce beliefs in such correlations.
I’m unclear on why you’re saying problem #2 requires UDT1.1. It’s better to obey, unless you think obeying negatively correlates with your other copy obeying. Is that the source of difficulty you’re pointing at? That we need UDT1.1 not to select an equilibrium, but to ensure that we end up in any equilibrium at all?
Ah, I see. You’re thinking of both theories in a math-intuition-based setting (“negatively correlates with your other copy” etc). I prefer to use a crisp proof-based setting, so we can discern what we know about the theories from what we hope they would do in a more fuzzy setting.
UDT1 receives an observation X and then looks for provable facts of the form “if all my instances receiving observation X choose to take a certain action, I’ll get a certain utility”.
UDT1.1 also receives an observation X, but handles it differently. It looks for provable facts of the form “if all my instances receiving various observations choose to use a certain mapping from observations to actions, I’ll get a certain utility”. Then it looks up the action corresponding to X in the mapping.
In problem 2, a UDT1 player who’s told to press button 1 will look for facts like “if everyone who’s told to press button 1 complies, then utility is 500”. But there’s no easy way to prove such a fact: the utility can only be inferred from the actions of both players, who might receive different observations. That’s why UDT1.1 is needed: to fix UDT1’s bug in handling observations.
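The unprovability can be seen by brute force (the framing below is mine): hold fixed “everyone told to press button 1 complies” and vary only the other player’s response to instruction 2; the utility isn’t pinned down.

```python
# Payoff of problem 2: 500 if both obey, 100 if exactly one obeys, else 0.
def payoff(obs_a, obs_b, a, b):
    obeyed = (a == obs_a) + (b == obs_b)
    return [0, 100, 500][obeyed]

# Player A is told 1 and complies; player B is told 2, response left free.
outcomes = sorted({payoff(1, 2, 1, b_on_2) for b_on_2 in (1, 2)})
print(outcomes)  # [100, 500]: compliance on obs 1 alone doesn't fix utility
```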
The crisp setting makes it clear that UDT1.1 is about making more equilibria reachable, not about equilibrium selection. A game can have several equilibria, all of them reachable without UDT1.1, like my problem 1. Or it can have one equilibrium but require UDT1.1 to reach it, like my problem 2.
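The problem-1 half of this is easy to check directly (again, my framing): with identical instances receiving identical observations, “all my instances take action a” fully determines the outcome, so quantifying over single actions suffices.

```python
# Problem 1: both identical instances press the same button, so a single
# action pins down the utility; UDT1's quantification over actions is enough.
def problem1_utility(action):
    return {1: 100, 2: 200}[action]

best_action = max((1, 2), key=problem1_utility)
print(best_action)  # 2: UDT1 provably reaches the better equilibrium
```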
Of course, when we move to a math-intuition-based setting, the difference might become more fuzzy. Maybe UDT1 will solve some problems it couldn’t solve before, or maybe not. The only way to know is by formalizing math intuition.