[Pre-emptive apologies for the stream-of-consciousness: I made the mistake of thinking while I wrote. Hopefully I ended up somewhere reasonable, but I make no promises]
"simulating HCH or anything really doesn’t require altering the action set of a human/agent"
My point there wasn’t that it requires it, but that it entails it. After any action by the LCDT agent, the distribution over future action sets of some agents will differ from those same distributions based on the prior (perhaps very slightly).
E.g. if I burn your kite, your actual action set no longer includes kite-flying, while your prior action set does. So after I take the [burn kite] action, my prediction of [kite exists] no longer has a reliable answer.
If I’m understanding correctly (and, as ever, I may not be), this is just to say that it’d come out differently based on the way you set up the pre-link-cutting causal diagram. If the original diagram effectively had [kite exists iff Adam could fly kite], then I’d think it’d still exist after [burn kite]; if the original had [kite exists iff Joe didn’t burn kite] then I’d think that it wouldn’t.
In the real world, those two setups should be logically equivalent. The link-cutting breaks the equivalence. Each version of the final diagram functions in its own terms, but the answer to [kite exists] becomes an artefact of the way we draw the initial diagram. (I think!)
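To make the artefact concrete, here’s a minimal toy sketch in Python (hypothetical node names and prior; not anyone’s actual formalism) of the two ways of drawing the pre-link-cutting diagram, and of how the surgery makes them disagree about [kite exists]:

```python
# Toy sketch: two logically equivalent pre-surgery diagrams that LCDT's
# link-cutting pulls apart. Node names and the prior are hypothetical.

PRIOR_ADAM_CAN_FLY_KITE = True  # prior: the kite exists, so Adam can fly it

def diagram_a(joe_burns_kite: bool, cut_links_to_agents: bool) -> bool:
    """[burn kite] --> [Adam can fly kite] --> [kite exists iff Adam could fly kite]."""
    if cut_links_to_agents:
        # LCDT: Joe's decision can't influence another agent's node,
        # so Adam's action set is read off the prior.
        adam_can_fly = PRIOR_ADAM_CAN_FLY_KITE
    else:
        adam_can_fly = not joe_burns_kite
    return adam_can_fly  # "kite exists iff Adam could fly kite"

def diagram_b(joe_burns_kite: bool, cut_links_to_agents: bool) -> bool:
    """[burn kite] --> [kite exists iff Joe didn't burn kite]; no agent on the path."""
    return not joe_burns_kite  # nothing to cut, regardless of cut_links_to_agents

# Pre-surgery the two diagrams agree; post-surgery they don't.
print(diagram_a(True, cut_links_to_agents=False), diagram_b(True, cut_links_to_agents=False))  # False False
print(diagram_a(True, cut_links_to_agents=True), diagram_b(True, cut_links_to_agents=True))    # True False
```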
In this sense, it’s incoherent (so Evan isn’t claiming there’s no bullet to bite, but rather that he’s biting it); it’s just less clear whether it matters that it’s incoherent.
I still tend to think that it does matter—but I’m not yet sure whether it’s just offending my delicate logical sensibilities, or if there’s a real problem.
For instance, in my reply to Evan, I think the [delete yourself to free up memory] action probably looks good if there’s e.g. an [available memory] node directly downstream of the [delete yourself...] action. If instead the path goes [delete yourself...] --> [memory footprint of future self] --> [available memory], then deleting yourself isn’t going to look useful, since [memory footprint...] shouldn’t change.
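Here’s the same kind of toy sketch for that memory example (hypothetical node names, numbers, and routing):

```python
# Toy sketch: the same action looks useful or useless depending on whether its
# consequences are routed through an agent node. All values are hypothetical.

PRIOR_FUTURE_SELF_FOOTPRINT_GB = 4.0
TOTAL_MEMORY_GB = 8.0

def available_memory_direct(delete_self: bool) -> float:
    # Routing 1: [delete yourself...] --> [available memory], no agent in between,
    # so nothing is cut and deletion does free memory.
    used = 0.0 if delete_self else PRIOR_FUTURE_SELF_FOOTPRINT_GB
    return TOTAL_MEMORY_GB - used

def available_memory_via_future_self(delete_self: bool) -> float:
    # Routing 2: [delete yourself...] --> [memory footprint of future self] --> [available memory].
    # The future self counts as an agent, so the incoming link is cut and the
    # footprint stays at its prior value; delete_self is deliberately ignored.
    return TOTAL_MEMORY_GB - PRIOR_FUTURE_SELF_FOOTPRINT_GB

print(available_memory_direct(True))           # 8.0 -> deletion looks good
print(available_memory_via_future_self(True))  # 4.0 -> deletion looks useless
```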
Perhaps it’d work in general to construct the initial causal diagrams this way: whenever there’s any choice, you route as much of the causal influence as possible through agent nodes. You then tend to get [LCDT action] --> [Agent action-set-alteration] --> [Whatever can be deduced from action-set-alteration].
You couldn’t do precisely this in general, since you’d need backwards-in-time causality—but I think you could do some equivalent. I.e. you’d put an [expected agent action set distribution] node immediately after the LCDT decision, treat that like an agent at decision time, and deduce values of intermediate nodes from that.
So in my kite example, let’s say you’ll only get to fly your kite (if it exists) two months from my decision, and there’s a load of intermediate nodes. But directly downstream of my [burn kite] action we put a [prediction of Adam’s future action set] node. All of the causal implications of [burn kite] get routed through the action set prediction node.
Then at decision time the action-set prediction node gets treated as part of an agent, and there’s no incoherence (but then I predict that my [burn kite] action fails to actually burn your kite).
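Here’s a rough sketch of that construction on the kite example (node names and the prior are again hypothetical): every causal implication of [burn kite] is deduced from the action-set prediction node, which is treated as part of an agent at decision time:

```python
# Toy sketch of the proposed construction: the decision's consequences are all
# routed through a [prediction of Adam's future action set] node, and that node
# is treated as an agent when the LCDT agent decides.

PRIOR_ADAM_ACTION_SET = {"fly kite", "do nothing"}

def predicted_adam_action_set(joe_burns_kite: bool, treat_as_agent: bool) -> set:
    if treat_as_agent:
        # The prediction node counts as an agent, so the link from Joe's
        # decision is cut and the prediction is just the prior.
        return set(PRIOR_ADAM_ACTION_SET)
    return PRIOR_ADAM_ACTION_SET - ({"fly kite"} if joe_burns_kite else set())

def kite_exists(joe_burns_kite: bool, treat_as_agent: bool) -> bool:
    # All the intermediate nodes over those two months are deduced from the
    # action-set prediction, never from [burn kite] directly.
    return "fly kite" in predicted_adam_action_set(joe_burns_kite, treat_as_agent)

print(kite_exists(True, treat_as_agent=False))  # False: the real-world answer
print(kite_exists(True, treat_as_agent=True))   # True: at decision time, [burn kite]
                                                # looks like it does nothing
```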
Anyway, quite possibly doing things this way would have a load of downsides (or perhaps it doesn’t even work??), but it seems plausible to me.
My remaining worry is whether getting rid of the incoherence in this way is too limiting—since the LCDT agent gets left thinking its actions do almost nothing (given that many/most actions would be followed by nodes which negate their consequences relative to the prior).
[I’ll think more about whether I’m claiming much/any of this impacts the simulation setup (beyond any self-deletion issues)]