0th Person and 1st Person Logic

Truth values in classical logic have more than one interpretation.

In 0th Person Logic, the truth values are interpreted as True and False.

In 1st Person Logic, the truth values are interpreted as Here and Absent relative to the current reasoner.

Importantly, these are both useful modes of reasoning that can coexist in a logical embedded agent.

This idea is so simple, and has brought me so much clarity, that I cannot see how an adequate formal theory of anthropics could avoid it!

Crash Course in Semantics

First, let’s make sure we understand how to connect logic with meaning. Consider classical propositional logic. We set this up formally by defining terms, connectives, and rules for manipulation. Let’s consider one of these terms: A. What does this mean? Well, its meaning is not specified yet!

So how do we make it mean something? Of course, we could just say something like “A represents the statement that ‘a ball is red’”. But that’s a little unsatisfying, isn’t it? We’re just passing all the hard work of meaning to English.

So let’s imagine that we have to convey the meaning of A without using words. We might draw pictures in which a ball is red, and pictures in which there is not a red ball, and say that only the former are A. To be completely unambiguous, we would need to consider all the possible pictures and point out the subset of them which are A. For formalization purposes, we will say that this set is the meaning of A.
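As a toy illustration of this set-based notion of meaning, here is a minimal sketch (the “worlds”, the proposition A, and all names below are invented for illustration, not anything defined in this post):

```python
# Toy universe: each "picture" (world) just records the color of one ball.
BALL_COLORS = ["red", "green", "blue"]
WORLDS = [{"ball_color": color} for color in BALL_COLORS]

# The meaning of A ("a ball is red") is the subset of pictures in which it holds.
MEANING_A = [world for world in WORLDS if world["ball_color"] == "red"]

print(MEANING_A)  # [{'ball_color': 'red'}]
```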

There’s much more that can be said about semantics (see, for example, the Highly Advanced Epistemology 101 for Beginners sequence), but this will suffice as a starting point for us.

0th Person Logic

Normally, we think of the meaning of A as independent of any observers. Sure, we’re the ones defining and using it, but it’s something everyone can agree on once the meaning has been established. Due to this independence from observers, I’ve termed this way of doing things 0th Person Logic (or 0P-logic).

The elements of a meaning set I’ll call worlds in this case, since each element represents a particular specification of everything in the model. For example, say that we’re only considering states of tiles on a 4x4 grid. Then we could represent each world simply by taking a snapshot of the grid.

Figure: Five possible worlds in a 4x4 universe with tiles selected from three colors, and which might contain a robot.

From logic, we also have two judgments: A is judged True for a world iff that world is in the meaning of A, and False if not. This judgment does not depend on who is observing it; all logical reasoners in the same world will agree.
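Here is a minimal sketch of worlds-as-grid-snapshots and the 0P judgment (shrunk to a 2x2 grid so every world can be listed; the statement chosen and all names are illustrative assumptions):

```python
from itertools import product

# Each world is one coloring of the tiles; a 2x2 grid keeps the set small (3^4 = 81 worlds).
TILE_COLORS = ["red", "green", "blue"]
WORLDS = set(product(TILE_COLORS, repeat=4))

# The meaning of a 0P-statement is a subset of worlds, e.g. A = "at least one tile is red".
MEANING_A = {world for world in WORLDS if "red" in world}

def judge_0p(meaning, world):
    """True iff the world is in the statement's meaning, False otherwise.
    Every logical reasoner in the same world reaches the same verdict."""
    return world in meaning

print(judge_0p(MEANING_A, ("red", "green", "blue", "blue")))    # True
print(judge_0p(MEANING_A, ("green", "green", "blue", "blue")))  # False
```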

1st Person Logic

Now let’s consider an observer using logical reasoning. For metaphysical clarity, let’s have it be a simple, hand-coded robot. We fix a set of possible worlds, assign meanings to various symbols, and give the robot the ability to make, manipulate, and judge propositions built from these.

Let’s give our robot a sensor, one that detects red light. At first glance, this seems completely unproblematic within the framework of 0P-logic.

But consider a world in which there are three robots with red light sensors. How do we capture the intuitive meaning of “my sensor sees red”? The obvious thing to try is to look at all the possible worlds and pick out the ones where the robot’s sensor detects red light. But there are three different ways to do this, one for each instance of the robot.

That’s not a problem if our robot knows which robot it is. But without sensory information, the robot doesn’t have any way to know which one it is! There may be robots which see a red signal and robots which do not, and nothing in 0P-logic can resolve this ambiguity for the robot: the ambiguity remains even if the robot has pinpointed the exact world it’s in!
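To make the ambiguity concrete, here is a minimal sketch (the world encoding and robot names are invented for illustration): even a fully pinpointed world containing three robot instances yields three candidate readings of “my sensor sees red”, and nothing on the world side says which one applies to the reasoner.

```python
# One fully specified world: it records, for each robot instance,
# whether that instance's red-light sensor is firing.
pinpointed_world = {"robot_0": True, "robot_1": False, "robot_2": True}

# Every 0P-reading of "my sensor sees red" must first decide which robot "my" refers to.
# Knowing the exact world still leaves one answer per choice of instance.
for instance, sensor_fires in pinpointed_world.items():
    print(f"read as '{instance}'s sensor sees red':", sensor_fires)
```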

So statements like “my sensor sees red” aren’t actually picking out subsets of worlds like 0P-statements are. Instead, they’re picking out a different type of thing, which I’ll term an experience.[1] Each specific combination of possible sensor values constitutes a possible experience.

Figure: Five possible experiences for a robot with a red sensor, a green sensor, and a blue sensor. Notice how yellow is a possible experience despite our robot’s universe not having any yellow tiles.

For the most part, experiences work in exactly the same way as worlds. We can assign meanings to statements like “my sensor sees red” by picking out subsets of experiences, just as before. It’s still appropriate to reason about these using logic. Semantically, we’re still just doing basic set operations—but now on sets of experiences instead of sets of worlds.

The crucial difference comes from how we interpret the “truth” values: A is judged Here for an experience iff that experience is in the meaning of A, and Absent if not. This judgment only applies to the robot currently doing the reasoning; even the same robot in the future may come to different judgments about whether A is Here. Therefore, I’ve termed this 1st Person Logic (or 1P-logic).
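The same set-based machinery, now run over experiences instead of worlds, can be sketched as follows (the sensor names, the encoding of experiences, and the statement chosen are assumptions for illustration):

```python
from itertools import product

# An experience is one assignment of values to the robot's own sensors.
SENSORS = ("red", "green", "blue")
EXPERIENCES = set(product([False, True], repeat=len(SENSORS)))  # 8 possible experiences

# The meaning of a 1P-statement is a subset of experiences,
# e.g. A = "my red sensor is firing".
MEANING_A = {exp for exp in EXPERIENCES if exp[SENSORS.index("red")]}

def judge_1p(meaning, current_experience):
    """'Here' iff the reasoner's current experience is in the statement's meaning,
    'Absent' otherwise. The verdict is relative to whoever is reasoning right now."""
    return "Here" if current_experience in meaning else "Absent"

print(judge_1p(MEANING_A, (True, False, False)))   # Here
print(judge_1p(MEANING_A, (False, False, True)))   # Absent
```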

We Can Use Both

In order to reason effectively about its own sensor signals, the robot needs 1P-logic.

In order to communicate effectively about the world with other agents, it needs 0P-logic, since 0P-statements are precisely the ones which are independent of the observer. This includes communicating with itself in the future, i.e. keeping track of external state.

Both modes of reasoning are useful and valid, and I think it’s clear that there’s no fundamental difficulty in building a robot that uses both 0P and 1P reasoning—we can just program it to have and use two logic systems like this. It’s hard to see how we could build an effective embedded agent that gets by without using them in some form.
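As a sketch of what “program it to have and use two logic systems” could look like (everything here is an invented illustration, not a specification from this post), the robot simply carries two meaning-assignments and two judgment procedures side by side:

```python
class TwoLogicRobot:
    """Hypothetical toy agent holding a 0P system and a 1P system side by side."""

    def __init__(self, worlds, experiences):
        self.worlds = worlds            # candidate objective worlds (0P side)
        self.experiences = experiences  # possible sensor-value combinations (1P side)

    # 0P: observer-independent, suited to communication and tracking external state.
    def judge_0p(self, meaning_over_worlds, world):
        return world in meaning_over_worlds  # True / False

    # 1P: relative to this reasoner right now, suited to reasoning about its own sensors.
    def judge_1p(self, meaning_over_experiences, current_experience):
        return "Here" if current_experience in meaning_over_experiences else "Absent"

robot = TwoLogicRobot(worlds={"w0", "w1"}, experiences={"sees_red", "sees_nothing"})
print(robot.judge_0p({"w0"}, "w1"))              # False
print(robot.judge_1p({"sees_red"}, "sees_red"))  # Here
```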

While 0P-statements and 1P-statements have different types, that doesn’t mean they are separate magisteria or anything like that. From an experience, we learn something about the objective world. From a model of the world, we infer what sort of experiences are possible within it.[2]

As an example of the interplay between the 0P and 1P perspectives, consider adding a blue light sensor to our robot. The robot has a completely novel experience when that sensor first gets activated! If its world model doesn’t account for that already, it will have to extend it somehow. As it explores the world, it will learn associations with this new sense, such as blue being commonly present in the sky. And as it studies light further, it may realize there is an entire spectrum, and be able to design a new light sensor that detects green light. It will then anticipate another completely novel experience once it has attached the green sensor to itself and the sensor has been activated.

This interplay allows for a richer sense of meaning than either perspective alone; blue is not just the output of an arbitrary new sensor, it is associated with particular things already present in the robot’s ontology.

Further Exploration

I hope this has persuaded you that the 0P and 1P distinction is a core concept in anthropics, one that will provide much clarity in future discussions and will hopefully lead to a full formalization of anthropics. I’ll finish by sketching some interesting directions it can be taken.

One important consequence is that it justifies having two separate kinds of Bayesian probabilities: 0P-probabilities over worlds, and 1P-probabilities over experiences. Since probability can be seen as an extension of propositional logic, accepting these two kinds of logic unavoidably gives us both kinds of probability. Additionally, we can see that our robot is capable of having both, with both 0P-probabilities and 1P-probabilities being subjective in the sense that they depend on the robot’s own best models and evidence.

From this, we get a nice potential explanation of the Sleeping Beauty paradox: 1/2 is the 0P-probability, and 1/3 is the 1P-probability (of slightly different statements: “the coin in fact landed heads” and “my sensors will see the heads side of the coin”, respectively). This could also explain why both intuitions are so strong.
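One way to cash this out numerically is sketched below (a minimal sketch; the thirder-style weighting of awakenings is an assumption for illustration, not necessarily this post’s own construction): the 0P-probability is taken over worlds (coin outcomes), and the 1P-probability over awakening experiences.

```python
from fractions import Fraction

# Worlds: the two ways the experiment can go, each with 0P-probability 1/2.
WORLDS = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}

# Awakenings each world contains: heads wakes Beauty once, tails twice.
AWAKENINGS = {"heads": ["Mon"], "tails": ["Mon", "Tue"]}

# 0P: probability over worlds that the coin in fact landed heads.
p_0p_heads = WORLDS["heads"]

# 1P: weight each awakening experience by its world's 0P-probability, then normalize.
weights = {(w, day): WORLDS[w] for w in WORLDS for day in AWAKENINGS[w]}
total = sum(weights.values())
p_1p_heads = sum(p for (w, _), p in weights.items() if w == "heads") / total

print(p_0p_heads)  # 1/2
print(p_1p_heads)  # 1/3
```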

It’s worth noting that no reference to preferences has yet been made. That’s interesting because it suggests that there are both 0P-preferences and 1P-preferences. That intuitively makes sense, since I do care about both the actual state of the world, and what kind of experiences I’m having.

Additionally, this gives a simple resolution to Mary’s Room. Mary has studied the qualia of ‘red’ all her life (gaining 0P-knowledge), but has never left her grayscale room. When she leaves it and sees red for the very first time, she does not gain any 0P-knowledge, but she does gain 1P-knowledge. Notice that there is no need to invoke anything epiphenomenal or otherwise non-material to explain this, as we do not need such things in order to construct a robot capable of reasoning with both 0P and 1P logic.[3]

Finally, this distinction may help clarify some confusing aspects of quantum mechanics (which was the original inspiration, actually). Born probabilities are 1P-probabilities, while the rest of quantum mechanics is a 0P-theory.


Special thanks to Alex Dewey, Nisan Stiennon and Claude 3 for their feedback on this essay, to Alexander Gietelink Oldenziel and Nisan for many insightful discussions while I was developing these ideas, and to Alex Zhu for his encouragement. In this age of AI text, I’ll also clarify that everything here was written by myself.

The idea of using these two different interpretations of logic together like this is original to me, as far as I am aware (Claude 3 said it thought so too FWIW). However, there have been similar ideas, for example Scott Garrabrant’s post about logical and indexical uncertainty, or Kaplan’s theory of indexicals.

  1. ^

    I’m calling these experiences because that is a word that mostly conveys the right intuition, but these are much more general than human experiences and apply equally well to a simple (non-AI) robot’s sensor values.

  2. ^

    More specifically, I expect there to be an adjoint functor pair of some sort between them (under the intuition that an adjoint functor pair gives you the “best” way to cast between different types).

  3. ^

    I’m not claiming that this explains qualia or even what they are, just that whatever they are, they are something on the 1P side of things.