Anthropics and Embedded Agency

I am not very familiar with the AI field, which is probably why I only recently came across Scott Garrabrant’s posts about embedded agency. To my surprise, I found they connect with anthropics, a topic I have been interested in for a long time. In particular, my solution to the anthropic paradoxes, Perspective-Based Reasoning (PBR), is highly similar to the non-embedded model (dualistic agency). I feel this connection between the two fields deserves more attention.

Perspective-Based Reasoning

My proposed solution to the anthropic paradoxes (PBR) can be summarized as follows.

1. Perspective-based reasoning is more fundamental than objective reasoning.

It simply means that thinking from a perspective (e.g. your first-person perspective) is more basic than thinking perspective-independently. This is the opposite of the common notion that objective reasoning is fundamental, with perspectives and indexicals as additional information.

2. Perspective is primitive.

A perspective is a reasoning starting point. Like an axiom, it can only be regarded as given, not analyzed further. This is why, in anthropics, self-locating probabilities such as “I am this particular person” in the Doomsday Argument or “today is Monday” in the Sleeping Beauty Problem are meaningless.

3. Reasoning from different perspectives shouldn’t mix.

Like statements derived from different axiomatic systems, reasoning from different perspectives should not be mixed. Switching to a different perspective halfway through an argument can lead to inconsistency. This is also the reason for perspective disagreement in anthropics (something all halfers have to recognize).

These ideas lead to a double-halfer position on the Sleeping Beauty Problem. They suggest there is no reference class for the agent itself (or for the moment now). PBR rejects the Doomsday Argument, the Simulation Argument, and arguments based on the fine-tuned universe.
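To make the double-halfer position concrete, here is a minimal sketch of the credences under the standard Sleeping Beauty setup (one awakening on heads, two on tails); the exact formulation is mine:

$$P(\text{Heads}) \;=\; P(\text{Heads}\mid\text{awakened}) \;=\; P(\text{Heads}\mid\text{awakened, told it is Monday}) \;=\; \frac{1}{2}$$

Thirders put the second term at 1/3, and Lewisian halfers put the third at 2/3; both moves require treating “today is Monday” as a proposition with a well-defined probability, which PBR denies.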

Similarities with Dualistic Agency

1. A dualistic agent exists outside of its environment. It primitively carves the world into “agent” and “environment” with an input-output relationship. This is mirrored in PBR, which regards the perspective (the self) as primitively defined. The input-output relation in the AI context corresponds to our perception and subjective experience, as highlighted here. This is also why there is no valid reference class for the self.

2. Dualistic agents tend not to model themselves; figuratively, such an agent “can treat himself as an unchanging indivisible atom”. PBR likewise does not self-analyze, because the perspective center is taken as primitive, which is why “self-locating probabilities” are invalid. This is further reflected in PBR’s support for the Copenhagen interpretation of quantum mechanics, in which it is to be expected that the “observer” cannot be physically explained by reductionism.

3. Dualistic agents assume a particular way of carving up the world and don’t allow for switching between different carvings. (Garrabrant expressed this explicitly in a different post.) PBR similarly holds that logic from different perspectives should not be mixed (i.e., switching perspectives halfway through an argument is not allowed), as doing so leads to inconsistencies.
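To see why mixing carvings causes trouble, here is a minimal simulation sketch (the setup and function name are mine, not from Garrabrant’s posts). Counting heads per experimental run, i.e. from the outside perspective, gives a long-run frequency of 1/2; counting heads per awakening, i.e. aggregating over Beauty’s indexical moments, gives 1/3.

```python
import random

def sleeping_beauty(n_runs: int, seed: int = 0) -> None:
    """Toss a fair coin per run: heads -> one awakening (Monday),
    tails -> two awakenings (Monday and Tuesday)."""
    rng = random.Random(seed)
    heads_runs = 0        # runs whose coin landed heads
    awakenings = 0        # total awakenings across all runs
    heads_awakenings = 0  # awakenings that occur under heads
    for _ in range(n_runs):
        if rng.random() < 0.5:  # heads
            heads_runs += 1
            awakenings += 1
            heads_awakenings += 1
        else:                   # tails
            awakenings += 2
    print(f"heads frequency per run:       {heads_runs / n_runs:.3f}")            # ~0.500
    print(f"heads frequency per awakening: {heads_awakenings / awakenings:.3f}")  # ~0.333

sleeping_beauty(100_000)
```

Neither ratio is wrong; they answer questions posed from different perspectives. PBR’s claim is that only one of them can be in play within a single chain of reasoning, and an argument that silently switches between them will contradict itself.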

Bottom Line

To say the least, agency problems in AI and anthropic paradoxes are deeply connected. Here is another example: one open problem of embedded agency concerns worlds that include multiple copies of the agent. The same problem, itself an extension of the reference class problem, appears repeatedly in anthropic arguments. Reading about the other field could bring fresh ideas to the table.

Personally, given that I support PBR, I think the traditional dualistic model should not be dismissed so easily. Many of its supposed shortcomings (e.g. that dualistic agency is idealistically egocentric, that it cannot self-analyze, etc.) can be explained. I want to keep this post short; the longer version, with my defence of dualistic agency, can be found on my website here.