Part 1: On What Is a Self (Discussion)

In Nonperson Predicates, Eliezer said:

“Build an AI? Sure! Make it Friendly? Now that you point it out, sure! But trying to come up with a “nonperson predicate”? That’s just way above the difficulty level they signed up to handle.

But a longtime Overcoming Bias reader will be aware that a blank map does not correspond to a blank territory. That impossible confusing questions correspond to places where your own thoughts are tangled, not to places where the environment itself contains magic. That even difficult problems do not require an aura of destiny to solve. And that the first step to solving one is not running away from the problem like a frightened rabbit, but instead sticking long enough to learn something.

So I am not running away from this problem myself.”

Me neither. When entering the non-existent gates of Bayesian Heaven, I don’t want to have to admit that I located a sufficiently small problem in problem-space, one that seems solvable, that left unsolved constitutes an existential risk, and that was not being tackled by anyone I met at the Singularity Institute, and that I simply ran away from it.

So, would you mind helping me? In the course of writing my CEV text, I noticed that discussing what people/selves are was a necessary preliminary step. I’ve written the first part of that text, and would like to know what is excessive, unclear, vague, or otherwise improvable.

On What Is a Self




Selves and Persons

On the eighth move of your weekly chess game you do what feels the same as always: reflect for a few seconds on the many layers of structure underlying the current game-state, especially regarding changes from your opponent’s last move. It seems reasonable to take his pawn with your bishop. After moving, you look at him and see the sequence of expressions: doubt (“Why did he do that?”), distrust (“He must be seeing something I don’t”), inquiry (“Let me double-check this”), Schadenfreude (“No, he actually failed”), and finally joy (“Piece of cake, I’ll win”). He takes your bishop with a knight that, from your perspective, could only have come from neverland. Still stunned, you resign. It is the second time in a row you have lost a game to a simple mistake. The excuse bursts naturally out of your mouth: “I’m not myself today.”

The functional role (with plausible evolutionary reasons) of this use of the concept of Self is easy to unscramble:
1) Do not hold your model of me responsible for these mistakes.
2) Either (a) I sense something strange about the inner machinery of my mind; the algorithm feels different from the inside. Or (b) at the very least, my now-visible mistakes are reliable evidence of a difference, one I detected only in hindsight.
3) If someone is watching this game, notice how my signaling this, and my friend’s not contesting it, is reliable evidence that I normally play chess better than this.

A few minutes later, you see your friend yelling hysterically at someone on the phone, and you explain to the girl who was watching: “He is not that kind of person.”

Here we have a situation where the analogues of 1 and 3 work, but there is no way for you to tell how the algorithm feels from the inside. You still know in hindsight that your friend doesn’t usually yell like that. So while 1, 2, and 3 still hold, 2(a) is no longer the case; only 2(b) remains.

I suggest that the property in 2(a) that blocks the interchangeability of the concepts of Self and Person is “having first-person epistemic information about X”. Selves have that; persons don’t. We use the term ‘person’ when we want to talk only about the epistemically intersubjective properties of someone. ‘Self’ is reserved for a person’s perspective on herself, including, for instance, indexical facts.

Other than that, Self and Person seem to be interchangeable concepts. This generalization is useful because it means most of the problems of personhood and selfhood can be collapsed into one.

Unfortunately, the Self/Person intersection is itself a Mongrel Concept, so it in turn has to be split apart.

Mongrel and Cluster Concepts

When a concept seems to defy easy explanation, there are two interesting possibilities for how to interact with it. The first is to assume that the disparate uses of the term ‘Self’ in ordinary language and science can be captured by a unique, all-encompassing notion of Self. The second is to assume that the different uses of ‘Self’ reveal a plurality of notions of Selfhood, each in need of a separate account. I will endorse this second assumption: Self is a mongrel concept in need of disambiguation. (To strengthen the analogical power of thinking about mongrels, it may help to know that Information, Consciousness, and Health are thought to be mongrel concepts as well.)
Without using specific tags for the time being, let us assume that there will be four kinds of Self: 1, 2, 3, and 4. To say that Self is a concept that sometimes maps onto 1, sometimes onto 3, and so on is not to exhaustively frame the concept’s usage. That is because 1 and 2 may themselves be cluster concepts.

The cluster concept shape is one of the most common shapes of concepts in our mental vocabulary. Concepts are associational structures. Most of the time, instead of drawing a clear line around a set in the world inside of which everything is an X and outside of which nothing is, concepts present a cluster-like structure, with nearly all members in the core area belonging and nearly none in the far periphery belonging. Not all of their typical features are logically necessary. The recognition of a feature produces an activation, the strength of which depends not only on the degree to which the feature is present but also on a weighting factor. When the sum of the activations crosses a threshold, the concept becomes active and the stimulus is said to belong to that category.
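
To make this weighted-activation picture concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption: the feature names, weights, and threshold are placeholders I am inventing, not any actual model of a concept; the sketch only preserves the general shape of graded features, weighting factors, and an activation threshold.

```python
# Minimal sketch of a cluster concept as a weighted threshold unit.
# All feature names, weights, and the threshold are illustrative inventions.

def concept_active(feature_degrees, weights, threshold):
    """Return True when the weighted sum of feature activations crosses the threshold."""
    activation = sum(degree * weights.get(feature, 0.0)
                     for feature, degree in feature_degrees.items())
    return activation >= threshold

# Toy 'game' concept: typical but not logically necessary features.
weights = {"has_rules": 0.4, "has_winner": 0.3, "is_fun": 0.2, "is_competitive": 0.1}

chess = {"has_rules": 1.0, "has_winner": 1.0, "is_fun": 0.7, "is_competitive": 1.0}
daydreaming = {"has_rules": 0.0, "has_winner": 0.0, "is_fun": 0.8, "is_competitive": 0.0}

print(concept_active(chess, weights, threshold=0.6))        # True: core member
print(concept_active(daydreaming, weights, threshold=0.6))  # False: peripheral case
```

Note that no single feature is necessary here: chess would still cross the threshold with is_fun set to zero, which is exactly what separates a cluster concept from a classical definition by necessary and sufficient conditions.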
Selves are mongrel concepts composed of different conceptual intuitions, each of which is itself a cluster concept; thus Selves are among the most elusive, abstract, high-level entities entertained by minds. While this may be aesthetically pleasing, presenting us as considerably complex entities, it is also a great ethical burden, for it leaves the domain of ethics, which is highly dependent on the concepts of Selfhood and Personhood, with a scattered, slippery ground-level notion from which to create the building blocks of ethical theories.
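
Continuing the same hypothetical sketch, a mongrel concept can be pictured as nothing more than a bag of such cluster concepts, with context deciding which one a given use of ‘Self’ picks out. The numbered sub-concepts and feature names below are again arbitrary placeholders, not the decomposition the later sections will argue for.

```python
# Hypothetical sketch only: a mongrel concept as a collection of cluster concepts.
# Sub-concepts stay as unnamed placeholders 1-4; every feature and weight is a stand-in.

def concept_active(feature_degrees, weights, threshold):
    # Same weighted-threshold rule as in the previous sketch.
    return sum(d * weights.get(f, 0.0) for f, d in feature_degrees.items()) >= threshold

# Self as a mongrel: four unnamed sub-concepts, each a (weights, threshold) cluster.
mongrel_self = {
    1: ({"feature_a": 0.6, "feature_b": 0.4}, 0.5),
    2: ({"feature_c": 0.7, "feature_d": 0.3}, 0.5),
    3: ({"feature_a": 0.2, "feature_e": 0.8}, 0.5),
    4: ({"feature_f": 1.0}, 0.5),
}

# "Is X a self?" is underspecified until we say which sub-concept is in play.
x = {"feature_a": 1.0, "feature_b": 0.9, "feature_e": 0.1}
print({k: concept_active(x, w, t) for k, (w, t) in mongrel_self.items()})
# -> {1: True, 2: False, 3: False, 4: False}
```

The same stimulus can count as a self under one sub-concept and not under another, which is why asking “Is X a self?” needs disambiguation before it can be answered.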

Several analogies have been used to convey the concept of a Cluster Concept; they invoke images of star clusters, neural networks lighting up, and sets of properties taking a majority vote. A particularly well-known analogy, used by Wittgenstein, is the game analogy, in which language games prescribe normative meanings that constrain a word’s meaning without determining a clear-cut case. Wittgenstein held that there was no clear set of necessary conditions determining what a game is. Bernard Suits came up with a refutation of that claim, stating that there is such a definition (modified from “What Is a Game?”, Philosophy of Science, Vol. 34, No. 2, Jun. 1967, pp. 148–156):

“To play a game is to engage in activity designed to bring about a specific state of affairs, using only means permitted by specific rules, where the means permitted by the rules are more limited in scope than they would be in the absence of such rules, and where the sole reason for accepting the rules is to make possible such activity.”



Can we hope for a similar understanding of Self to be found soon? Let us invoke:

The Hidden Variable Hypothesis: There is a core essence that separates the class of selves from non-selves; it is just not yet within the reach of our current state of knowledge.


While this would be desirable, there are various reasons to be skeptical of the Hidden Variable Hypothesis:
1) Any plausible candidate core would have to disentangle selves from organisms in general, from superorganisms (e.g., insect societies), and from institutions.
2) We clearly entertain different models of what selves are for different purposes, as shown below in the section Varieties of Self-Systems Worth Having.
3) Design considerations: being evolved structures which encompass several resources of a recently evolved mind, and which came into being through a complex dual-inheritance evolution of several hundred thousand replicators of two kinds (genes and memes), Selves are among the most complex structures known and thus unlikely to possess a core essence, for causal design reasons independent of how intractable it would be to detect and describe such an essence.

From now on, then, I will assume as common ground that Selves are mongrel concepts, composed of some as-yet-undiscussed number of cluster concepts.

Topics not yet written:

Organisms, Superorganisms, and Selves
Selves and Sorites
Selves Beyond Sorites
Persons, the Evidence of Other Selves
Selves as Utility Increaser Unnatural Clusters
What Do We Demand of Selves?
Varieties of Self-Systems Worth Having
Drescher: Personhood Is An Ethical Predicate
What Matters About Selves?
