I also don’t understand the general case of these problems, but from what I understand, discussing payoff matrices is the wrong thing to do. The subject is jumbles of wires, not “rational agents”, and dancing around computational complexity, not finding simple strategies or heuristics such as not listening to threats. (The “baseline” harm discussion seems to end with the suggestion that rational agents won’t blackmail other rational agents to begin with; but what would you do when you are blackmailed by a jumble of wires?)
These concepts seem important:

- Logical uncertainty: the state of a partial computation; program code vs. denotation, state vs. dynamics, proof vs. cut-free proof.
- Observational uncertainty, and its combination with logical uncertainty.
- Acausal control (“logical control”): a situation where a system X is defined using the state (partial computation) of an agent A, so that A’s decisions control what X is. This means we are interested in the way X works, not just in what it does; not in its denotation, not in its strategy.
- Recursive acausal control: what happens when the agent controls its environment by conceptualizing that environment as containing the agent, or when two agents think about each other.

The last of these is the crux of most games, and it seems incompatible with Bayesian networks, since it requires thinking about algorithms (and not just their semantics: acausal control is interested in the way things work, not just in what they do).
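To make the acausal-control idea concrete, here is a minimal toy sketch (not from the source; all names are hypothetical, and the payoff numbers are the standard Newcomb-style illustration, assumed here for concreteness). The environment X is defined in terms of the agent’s decision procedure itself, so changing the agent’s code changes what X is, even before the agent “acts” inside it:

```python
def one_boxer():
    # The agent's decision procedure. Its code, not merely its
    # output on one occasion, is what the environment is built from.
    return "one-box"

def two_boxer():
    return "two-box"

def environment(agent_fn):
    """A system X defined using the agent's decision procedure.

    The 'predictor' step runs the agent's own code, so the agent's
    decisions control what X is -- this is the toy form of
    acausal ('logical') control.
    """
    prediction = agent_fn()          # X's state depends on A's code
    opaque_box = 1_000_000 if prediction == "one-box" else 0

    choice = agent_fn()              # the agent acts inside X
    return opaque_box if choice == "one-box" else opaque_box + 1_000

print(environment(one_boxer))   # the one-boxing code yields the large payoff
print(environment(two_boxer))   # the two-boxing code yields only the small one
```

The point of the sketch is that `environment` cannot be summarized by a payoff matrix over the agent’s possible outputs: it inspects (here, runs) the agent’s procedure, so two agents with the same eventual action but different code can face different worlds. The recursive case, where the agent in turn models an environment containing itself, is what the paragraph above claims breaks the Bayesian-network picture.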