I think your distinction between causal and protocol abstractions makes sense, and it's related to my distinction between causally relevant vs. causally irrelevant latent variables. It's not quite the same, because abstractions which are rendered causally irrelevant in some world model can still be causal in the sense of aggregating together a bunch of things with similar causal properties.
I feel like personal identity has both elements of causal abstraction and of protocol abstraction. E.g. social relationships like debts seem to be strongly tied to protocol abstraction, but there’s also lots of social behavior that only relies on causal abstraction.
I agree.
Coming up with a normative theory of agency in the case of protocol abstraction actually sounds like a fairly important task. I have some ideas about how to address causal abstraction, but I haven’t really thought much about protocol abstraction before.
Can you clarify what you mean by a “normative theory of agency”? I don’t think I’ve ever seen this phrase before.
What I mean is stuff like decision theory/selection theorems/rationality: studies of how agents normatively should act.
Usually such theories do not take abstractions into account. I have some ideas for how to take causal abstractions into account, but I don’t think I’ve seen protocol abstractions investigated much.
In a sense, they could technically be handled by just having utility functions over universe trajectories rather than universe states, but some things about this seem unnatural. For example, for the purposes of Alex Turner's power-seeking theorems, utility functions over trajectories may be extraordinarily power-seeking, so if we could find a narrower class of utility functions, that would be useful.
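To make that contrast concrete (a rough sketch in the finite-MDP setting that the power-seeking theorems are usually stated in; this framing is mine, not something established above): a utility function over universe states is some u : S -> R, scored along a trajectory tau = (s_0, ..., s_T) as U(tau) = sum_t gamma^t * u(s_t), whereas a utility function over universe trajectories is an arbitrary U : S^(T+1) -> R that can depend on the whole history, e.g. on whether a debt was incurred and later repaid. That extra expressiveness is what would let trajectory utilities encode protocol-like facts, but it also makes the class so broad that it plausibly includes many extremely power-seeking objectives, which is why a narrower class would be useful.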