My intuition that there’s something “real” about morality seems to come from a sense that the consensus process would be expected to arise naturally across a wide variety of initial universe configurations. The more social a species is, the more its members seem to have a sense of doing well by other beings; the veil of ignorance seems intuitive, in some sense, to them. It’s not that there’s some thing outside us; it’s that, if I’m barking up the right tree here, our beliefs and behaviors are a consequence of a simple pattern in evolutionary processes that generates things like us fairly consistently.
If we imagine a CEV process that can be run on most humans without producing highly noisy extrapolations, and that we think is in some sense a reasonable CEV process, then for my part I would try to think about the originating process that generated vaguely cosmopolitan moralities, and look for regularities that would be expected to generate them across universes. Call these regularities the “interdimensional council of cosmopolitanisms”. I’d want to study those regularities and see if there are structures that inspire me; call that “visiting the interdimensional council of cosmopolitanisms”. If I do this, then somewhere in that space I’d find a universe configuration that produces me, and produces me considering this interdimensional council. It’d be a sort of LDT-ish thing to do, but importantly it happens before I decide what I want, not as a rational bargaining process to trade with other beings.
But ultimately, I’d see my morality as a choice I make, and one I make after reviewing what choices I could have made. I’d need something like a reasonable understanding of self-fulfilling prophecies and decision theories (I’m currently partial to intuitions I get from FixDT), so as not to accidentally choose something purely by self-fulfilling prophecy. I’d see the “real”-ness of morality as the realness of the fact that evolution produces beings with cosmopolitan-morality preferences.
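To gesture at what I mean by not choosing purely by self-fulfilling prophecy, here’s a toy sketch (this is not FixDT itself, and every name and number in it is made up for illustration): first work out which candidate beliefs would make themselves true if adopted, and only then choose among those fixed points by preference, rather than settling into whichever one you drifted into.

```python
# Toy model of choosing among self-fulfilling prophecies (not FixDT itself;
# all names and numbers here are made up for illustration).

# A crude "expectations shape behavior" world: expecting high trust produces
# trustworthy behavior, expecting low trust produces defection, and "medium"
# expectations collapse into low trust.
def world_response(belief: str) -> str:
    return {"high": "high", "medium": "low", "low": "low"}[belief]

candidates = ["high", "medium", "low"]

# Beliefs that make themselves true if adopted.
fixed_points = [b for b in candidates if world_response(b) == b]   # ["high", "low"]

# Pick among the self-fulfilling options deliberately, by preference,
# instead of inheriting whichever one we happened to start in.
utility = {"high": 1.0, "medium": 0.4, "low": 0.0}
chosen = max(fixed_points, key=utility.get)

print(f"self-fulfilling beliefs: {fixed_points}; deliberately chosen: {chosen}")
```

Both “high” and “low” are self-fulfilling here; the point is that picking between them is a choice, made with the fixed-point structure in view.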
It’s not clear to me that morality wins “by default”, however. I have an intuition, inspired by the rock-paper-scissors cycle in evolutionary game-theory prisoner’s-dilemma experiments (note: this citation is not optimal; I’ve asked a Claude research agent to find the papers that show the conditions for this cycle more thoroughly, and will edit when I get the results), that defect-ish moralities can win, and that participating in the maximum-sized cooperation group is a choice. The realness is the fact that the cosmopolitan, open-borders, scale-free-tit-for-tat cooperation group can emerge, not that anything is obligated a priori by rationality to prefer to be in it. What I want is to increase the size of that cooperation group, to keep it from losing the scale-free property and collapsing into either isolationist “don’t-cooperate-with-cosmopolitan-morality-outsiders” bubbles or centralized bubbles, and to ensure that it thoroughly covers the existing moral patients. I also want to guarantee, if possible, that it’s robust against cooperating with moralities that defect in return.
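For concreteness, here’s a minimal sketch of the kind of experiment I have in mind, assuming the standard AllC / AllD / TFT strategy set with a small complexity cost for TFT. The payoffs, population size, mutation rate, and selection strength are illustrative assumptions rather than the settings of any particular paper, and whether (and how fast) the population actually cycles depends on those parameters.

```python
# Toy finite-population repeated prisoner's dilemma with AllC, AllD, and TFT
# (TFT pays a small complexity cost). This is the general kind of setup in
# which AllD -> TFT -> AllC -> AllD cycles have been reported; all parameters
# below are illustrative assumptions, not any specific paper's conditions.
import numpy as np

rng = np.random.default_rng(0)
T, R, P, S = 5.0, 3.0, 1.0, 0.0     # temptation, reward, punishment, sucker
m, cost = 20, 0.5                   # rounds per game, complexity cost for TFT
N, mu, beta = 200, 0.005, 0.1       # population size, mutation rate, selection strength
ALLC, ALLD, TFT = 0, 1, 2

# Total payoff of the row strategy against the column strategy over m rounds.
A = np.array([
    [m * R,        m * S,                  m * R],
    [m * T,        m * P,                  T + (m - 1) * P],
    [m * R - cost, S + (m - 1) * P - cost, m * R - cost],
])

counts = np.array([0, N, 0])        # start from an all-defect population

def avg_payoff(s):
    """Expected payoff of a strategy-s player against a random other player."""
    others = counts.copy()
    others[s] -= 1                  # exclude self-play
    return A[s] @ others / (N - 1)

for step in range(300_000):
    if step % 30_000 == 0:
        print(f"step {step:6d}  AllC={counts[ALLC]:3d}  AllD={counts[ALLD]:3d}  TFT={counts[TFT]:3d}")
    if rng.random() < mu:
        # Mutation: a random individual switches to a uniformly random strategy.
        counts[rng.choice(3, p=counts / N)] -= 1
        counts[rng.integers(3)] += 1
        continue
    # Imitation: an individual with strategy a copies strategy b with a
    # payoff-dependent (Fermi) probability.
    a, b = rng.choice(3, size=2, p=counts / N)
    if a != b and rng.random() < 1 / (1 + np.exp(-beta * (avg_payoff(b) - avg_payoff(a)))):
        counts[a] -= 1
        counts[b] += 1
```

In this framing it’s the complexity cost that keeps the wheel turning: once defectors are rare, unconditional cooperators can spread at the reciprocators’ expense, and they in turn reopen the door to defectors, which is part of why I don’t treat the cooperative cluster’s dominance as a default.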
See also eigenmorality as a hunch source.
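To show the flavor of that hunch, here’s a minimal sketch of one simple variant of eigenmorality (after Scott Aaronson’s essay of that name): “a moral agent is one who cooperates with moral agents”, scored as the principal eigenvector of a cooperation graph. The example matrix is made-up data.

```python
# Rough sketch of one simple eigenmorality variant: an agent's score is
# proportional to the scores of the agents it cooperates with (a PageRank-like
# principal eigenvector of the cooperation graph). The matrix is made-up data.
import numpy as np

# coop[i, j] = 1 if agent i cooperates with agent j.
# Agents 0-2 cooperate with each other and with 4; agent 3 defects against
# everyone; agent 4 cooperates indiscriminately, including with 3.
coop = np.array([
    [0, 1, 1, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0],
], dtype=float)

# Power iteration for the principal eigenvector.
score = np.ones(len(coop))
for _ in range(200):
    score = coop @ score
    score /= score.sum()

for i, s in enumerate(score):
    print(f"agent {i}: eigenmorality score {s:.3f}")
```

In this variant the blanket defector’s score drops to zero, while cooperating with it costs agent 4 nothing; Aaronson’s eigenmoses/eigenjesus variants differ on exactly how much that indiscriminate cooperation should be penalized.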
I suspect that splitting LDT and morality like this is a bug arising from being stuck with EUT, and that a justified scale-free agency theory would not have this bug. It would give me a better basis for arguing (1) that I want to be in the maximum-sized eigenmorality cluster, the one that conserves all the agents that cooperate with it and tries to conserve as many of them as possible, and (2) that we can decide to make that a dominant structure in our causal cone by defending it strongly.