(Speculation) Suppose you genuinely try to choose a distribution over minds D that you personally consider cosmically general, and that you don’t try to tailor D so that either “stealing is bad” or “stealing is good” is the prevailing norm amongst them. For each of the distributions D → D’ → D″ etc., I personally suspect with >50% subjective probability that the distribution you choose will yield “stealing is bad” as the Schelling norm, and not “stealing is good”. In particular, I think the cosmic asymmetry I’m positing is probably detectable to you specifically, if you think about it long enough and even-handedly enough without trying to make ‘good’ or ‘bad’ specifically the answer.
I think this would be cool if it were true, but I’m worried that the sequence D → D′ → D″ converges to something pretty weird, not the “cosmic compromise” you hope for. The sequence might converge to some D^∞ dominated by a “solipsistic attractor”: agents who think the cosmic population consists only of themselves, simple agents who hoard measure on themselves. This is even more likely when you consider that there are risks to thinking hard about which agents exist, so some agents will “lock in” an unsophisticated logical prior.
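To make the worry concrete, here is a minimal toy sketch (entirely my own construction and assumptions, not something from the post): model the update D → D′ as a Markov chain in which each mind redistributes the measure it holds according to its own beliefs about which minds exist. A solipsist is then an absorbing state, and iteration drains measure toward it:

```python
import numpy as np

# Toy model (my assumption, not the post's definition of D -> D'):
# n "minds", each with a row-stochastic belief vector P[i] over which
# minds exist / deserve measure. The update D -> D' is D' = D @ P:
# each mind passes along the measure it holds according to its own beliefs.
rng = np.random.default_rng(0)
n = 6
P = rng.dirichlet(np.ones(n), size=n)  # minds 0..4 spread belief broadly
P[5] = np.eye(n)[5]                    # mind 5 is a solipsist: all mass on itself

D = np.full(n, 1.0 / n)                # start from a uniform distribution over minds
for _ in range(200):                   # iterate D -> D' -> D'' -> ...
    D = D @ P

print(np.round(D, 3))  # nearly all measure ends up on the solipsist (index 5)
```

Of course, whether the real D → D′ map behaves anything like a finite Markov chain is exactly what’s in question; the sketch only illustrates why absorbing states would come to dominate if it did.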
In short: if X cares about Y, and Y cares about Z, then X cares about Z. (This follows if “X cares about Y” means something like “Y has low complexity in X’s Solomonoff prior”, because complexity composes subadditively: you can describe Z to X by first describing Y, and then describing Z relative to Y.) But the analogous implication in the other direction fails: if X cares about Y and X cares about Z, Y doesn’t necessarily care about Z.
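To spell out the chaining step (writing K_X(·) for description length relative to X’s universal machine is my gloss on the comment, and I’m ignoring logarithmic precision terms):

```latex
% Subadditivity of description length: describing Z to X by way of Y
% costs at most the two stages plus a constant overhead.
K_X(Z) \;\le\; K_X(Y) + K_Y(Z) + O(1)
```

So if “X cares about Y” and “Y cares about Z” both mean the corresponding term on the right is small, then K_X(Z) is small too. Nothing analogous bounds K_Y(Z) in terms of K_X(Y) and K_X(Z), which is why the implication doesn’t run the other way.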
The sequence might converge to some D^∞ dominated by a “solipsistic attractor”: agents who think the cosmic population consists only of themselves, simple agents who hoard measure on themselves.
I’m curious whether what you mean by “solipsism” here is a scale-invariantly adaptive norm for civilizations. (See the post’s section on “Scale invariant adaptations”.)
My sense is that civilizations survive, grow, and reproduce more when each individual is aware, at least behaviorally, that other members of the civilization exist and serve valuable functions, such as computing or building valuable things that the individual would not compute or build on their own.
there are risks to thinking hard about which agents exist
I’m pretty sure these risks are reduced by the concepts of Schelling goodness and/or acausal normalcy, which I think can help ground and regularize your thinking. In short: if you lock yourself into an acausal trade relationship with a very specific alien mind that you imagined, that can interfere with your becoming a valued member of a broader coalition of minds, and is thus not very scale-invariantly adaptive, and thus not very Schelling-good amongst humanity or even the cosmos.
Let me spell out a bit more how I think awareness of the idea of Schelling goodness can reduce the risk of thinking about which agents exist…
As far as I know from other posts/discussions about risks from merely thinking about agents, the risks you’re talking about are roughly of the form:
a) locking in a relationship with a specific alien mind that you think of or read about, and/or
b) over-committing to some self-harming behavior or obsession that you fear the cosmos wants from you.
...rather than
c) thinking broadly about scale-invariantly adaptive pro tanto moral norms, which don’t override all other norms,
d) remembering that even pro tanto norms can be acknowledged without being obeyed; see the middle section on “Recognition versus endorsement versus adherence”, and
e) remembering that human civilization has its own Schelling answers to moral questions, which might differ from the cosmos, and it’s healthy to keep those in mind, as well as your own morals; see the section on Terrestrial Schelling-goodness.
To put this all a bit more experientially, without assuming we’re talking about you specifically:
If you feel afraid to notice a norm or think about a broad distribution of agents because it might somehow overtake you in a bad way, that might be a sign that your mind too quickly equates acknowledgement with adherence, or that you’re thinking of absolute deontological commands rather than pro tanto morals.
To be clear, I’m not saying there are never any risks to thinking about things, especially for persons who have experienced mental health crises brought on by unhealthy thinking patterns.
What I am trying to say in response to your “risks to thinking hard about which agents exist” is more like this: thinking about and using healthy thinking patterns is healthy; thinking about very specific agents or norms and over-committing to them is unhealthy.