You Only Get One Shot: an Intuition Pump for Embedded Agency

This is a short attempt to articulate a framing which I sometimes find useful for thinking about embedded agency. I noticed that I wanted to refer to it a few times in conversations and other writings.

A useful stance for thinking about embedded agents takes ‘actor-moments’, rather than (temporally-extended) ‘agents’ or ‘actors’, as the more primitive, or fundamental, unit. The key property of an actor-moment is that it gets one action—one opportunity to ‘do something’—before becoming simply part of the history of the world: no longer actual.

This is just one of the implications of embedded agency, but sometimes pulling out more specific consequences helps to motivate progress on ideas. It is an intuition pump, and, like any archetypal intuition pump, it does not tell the whole story and should be used with caution.

The Cartesian picture

It is often convenient to consider a decision algorithm to persist through time, separated from its environment by a Cartesian boundary. The agent receives (perhaps partial) observations from the environment, performs some computation, and takes some action (perhaps updating some internal state, learning from observations as it goes). The resulting change in the environment produces some new observation and the process continues.
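The Cartesian picture can be made concrete as a loop. This is an illustrative sketch only (all names here are hypothetical, not from any particular library): one persistent agent repeatedly observes, acts, and updates its internal state across cycles.

```python
# A sketch of the Cartesian picture: a single agent persisting through time,
# separated from the environment, looping observe -> act -> update.

def cartesian_run(env_step, policy, update, state, observation, steps):
    """Run a persistent agent for `steps` interaction cycles."""
    history = []
    for _ in range(steps):
        action = policy(state, observation)   # compute an action from state + observation
        observation = env_step(action)        # the environment responds
        state = update(state, observation)    # the agent updates its internal state
        history.append(action)
    return history

# A toy instantiation with integer states, observations, and actions.
acts = cartesian_run(
    env_step=lambda a: a + 1,                 # environment: increment the action
    policy=lambda s, o: s + o,                # policy: sum of state and observation
    update=lambda s, o: s + 1,                # 'learning': bump a counter
    state=0,
    observation=0,
    steps=3,
)
```

The point of the sketch is structural: the same `policy` and `update` objects survive every iteration, which is exactly the persistence assumption the next section drops.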

This is convenient because it is often empirically approximately true for the actors we encounter[1].

Only one shot

In contrast, in reality, any act, and the concomitant changes in the environment, impinge on the actor (which is after all part of the environment), even if only in a minor way[2].

Taking an alternative stance where we imagine an actor existing only for a moment—having a single ‘one shot’ at action—can prompt new insights. In this framing, a ‘state update’ is just a special case of the more general ‘self-modification’, which is itself a special case of ‘successor engineering’. And all of these are part of the larger picture implied by taking as fundamental not the ‘actor’ but the ‘actor-moment’: an actor-moment makes a decision and so unfolds a particular future world trajectory, which may—or may not—contain relevantly-similar actor-moments.[3]
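The actor-moment stance can also be sketched in code (again with hypothetical names, purely for illustration): each actor-moment acts exactly once, and a ‘state update’ appears as the special case of engineering a successor that shares the same policy.

```python
# A sketch of the actor-moment stance: an actor-moment gets one action,
# then is only history; continuity is recovered via successor engineering.

from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)          # frozen: an actor-moment never mutates in place
class ActorMoment:
    policy: Callable[[int], int]  # maps an observation to an action
    memory: int                   # whatever it inherits from past actor-moments

def one_shot(actor: ActorMoment, observation: int):
    """The actor-moment takes its single action and engineers a successor."""
    action = actor.policy(observation)
    # 'Successor engineering': build the next actor-moment. A plain 'state
    # update' is the special case where only `memory` changes and the policy
    # is copied over unmodified; nothing forces the successor to be similar.
    successor = ActorMoment(policy=actor.policy, memory=actor.memory + action)
    return action, successor

a0 = ActorMoment(policy=lambda o: o * 2, memory=0)
act, a1 = one_shot(a0, observation=3)
```

Here `a0` is never modified: after its one action it is simply a past fact about the world, and `a1` is a new, relevantly-similar actor-moment that the decision brought into being.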

This stance is somewhat unnatural for many actors we encounter (and consider worthy of attention and conversational investment), because for the most part these have been selected for a semblance of self-integrity and durability, and so the Cartesian abstraction is a workable approximation. But departures from these modes, or additional modes of influence over subsequent actor-moments, may be more accessible to future agents, especially artificial ones[4].

Further, I have found this a useful stance for analysing existing and hypothetical systems, because it guards against leaky intuitions, and it provides an alternative framing for better understanding systems-building-systems (some of which already exist, and some of which might some day exist).

Finally, as embedded agents ourselves, humans can sometimes get useful insights from taking this stance.

Scott Garrabrant discusses Cartesian Frames, which I think are a related (and more fleshed-out) conceptual tool.

  1. most likely as a consequence of self- and goal-content preservation being simultaneously instrumentally and intrinsically useful strategies with respect to natural selection. ↩︎

  2. Sometimes we can abstract that impact as consisting of a ‘state update’ to some privileged computational component of the algorithm implemented by the actor, while leaving the rest of the algorithm unchanged, and be essentially correct. ↩︎

  3. All of this is ignoring the challenge of defining what an ‘act’ is, what an ‘actor’ is, and the related questions about counterfactuals. I also ignore the challenge of identifying the time interval constituting a ‘moment’: it seems sensible for this to depend on the essential form of the algorithm the actor implements. ↩︎

  4. For example, by uncoupling from constraints which restrict operations relevant to other instrumental goals. ↩︎