I’m staying out of this discussion mainly because I’m incredibly confused about acausal/timeless/counterfactual trade/blackmail. Eliezer gave a small presentation at the recent decision theory mini-workshop on his ideas but unlike Stuart I’m pretty sure I don’t understand it. I’ve been told there are also some very rough notes/drafts on related ideas written by a couple of individuals floating around SIAI and FHI, but so far I have been unsuccessful in getting access to them.
ETA: I should mention that the workshop was very enjoyable and I greatly appreciate SIAI’s efforts in setting it up, even though I came out more confused than I went in. That just means I wasn’t confused nearly enough beforehand. :)
I also don’t understand the general case of these problems, but from what I do understand, discussing payoff matrices is the wrong thing to do. These problems are about jumbles of wires, not “rational agents”, and about dancing around computational complexity, not about figuring out simple strategies or heuristics such as not listening to threats. (The “baseline” harm discussion seems to end on the suggestion that rational agents won’t blackmail other rational agents in the first place, but what would you do when you are blackmailed by a jumble of wires?)
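To make the “rational agents won’t blackmail other rational agents” suggestion concrete, here is a toy sketch of my own (not from the thread; all names and payoffs are invented for illustration): a blackmailer who can inspect the victim’s policy only issues a threat when doing so is profitable, so a victim committed to never paying is never threatened. A jumble of wires, of course, need not follow this logic.

```python
# Toy illustration (my own construction) of the "don't listen to threats"
# heuristic. Payoffs and names are hypothetical.

COST_OF_CARRYING_OUT = 5   # hypothetical cost to the blackmailer of punishing
DEMAND = 10                # hypothetical payment demanded

def blackmailer(victim_policy):
    # The blackmailer reads the victim's policy and only threatens
    # when the victim would pay up; threatening a refuser just incurs
    # COST_OF_CARRYING_OUT for nothing.
    if victim_policy("threatened") == "pay":
        return "threaten"
    return "abstain"

always_pay = lambda situation: "pay"
never_pay = lambda situation: "refuse"

print(blackmailer(always_pay))  # threaten
print(blackmailer(never_pay))   # abstain
```

The point of the toy model is only that commitment to refusal changes the blackmailer’s calculation; it says nothing about an adversary that doesn’t calculate at all.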
These concepts seem important: logical uncertainty (the state of a partial computation; program code vs. denotation; state vs. dynamics; proof vs. cut-free proof); observational uncertainty, and its combination with logical uncertainty; acausal control (“logical control”), a situation where a system X is defined using the state (partial computation) of an agent A, so that A’s decisions control what X is, which means we are interested in the way X works, not just in what it does (not in its denotation, not in its strategy); and recursive acausal control, what happens when the agent controls the environment by conceptualizing it as containing the agent, or when two agents think about each other. The last is the crux of most games, and it seems incompatible with Bayesian networks: it requires thinking about algorithms themselves, not just their semantics, since acausal control is concerned with the way things work, not just with what they do.
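As a minimal sketch of what “a system X defined using the agent A’s computation” might look like, here is a toy Newcomb-style example of my own construction (the functions and payoffs are invented, not taken from the thread): the environment runs the agent’s own code to predict its choice, so the agent’s decision controls the environment without any causal channel.

```python
# Toy illustration (my own construction) of acausal/"logical" control:
# the environment X is *defined in terms of* agent A's program, so A's
# decision determines what X is.

def agent():
    # A one-boxes, knowing the predictor below inspects this very function.
    return "one-box"

def environment(agent_program):
    # X is defined using A's computation: the predictor simply runs the
    # agent's code to forecast its choice.
    prediction = agent_program()           # "simulate" the agent
    box_b = 1_000_000 if prediction == "one-box" else 0
    action = agent_program()               # the agent's actual choice
    if action == "one-box":
        return box_b
    return box_b + 1_000                   # two-boxing adds the small box

print(environment(agent))  # 1000000: one-boxing pays off here
```

Here the payoff depends on how the agent’s program works, not merely on which action it emits in isolation, which is the sense in which denotation alone is not enough.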
but unlike Stuart I’m pretty sure I don’t understand it.
Sigh… you may be the wiser of the two of us.
I understand certain formal models that resemble the blackmail problem, but I have only a sloppy understanding of the exact conditions where they apply. Will edit the post to insert the formal model.