As for your first question, there are certainly other thought systems (or I suppose decision theories) that allow a thing to propagate itself, but I highlight a hypothetical decision theory that would be ideal in this respect. Of course, given that things are different from each other (as you mention), this ideal decision theory would necessarily be different for each of them.
Additionally, as the ideal decision theory for self-propagation is computationally intractable to follow, “the most virulent form” isn’t[1] actually useful for anything that currently exists. Instead, we see more computationally tractable propagation-based decision theories built on messy heuristics that happened to correlate with continued existence in the environment where those heuristics were able to develop.
For your final question, I don’t think that this theory explains initial conditions like there being several things in the universe. Other processes analogous to random mutation, allopatric speciation, and spontaneous creation (applied not only to species, but to ideas, communities, etc.) would be better suited to answering such questions. “Propagative decision theory” does have some implications for the decision theories of things that can actually follow a decision theory, and it gives a very solid indicator for otherwise unsolvable/controversial moral quandaries (e.g. insect suffering), but beyond that it only really helps as much as evolutionary psychology does when it comes to explaining properties that already exist.
- ^
Other than in the case where some highly intelligent being manages to apply this theory well enough to pursue things like the instrumentally convergent goals that the ideal theory would prioritize, in which case this paragraph suddenly stops applying.
Rather than just going to the conclusion that there is no objective way to compare individuals, I feel a better way to think about this topic is to continue full-tilt into the evolutionary argument about keeping track of fitness-relevant information, taking it to the point where one’s utility function literally becomes fitness.[1][2]
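To make that proposal a bit more concrete (this is my own notation, not something from the original discussion), a minimal way to write it down might be:

```latex
% A minimal sketch of "utility function = fitness" (my notation, an assumption).
% F  = some measure of inclusive fitness, e.g. expected number of surviving
%      descendants, weighted by relatedness.
% a  = an action under consideration.
U(a) \;=\; \mathbb{E}\!\left[\, F \mid a \,\right]
% An agent following this utility function simply picks
% a^{*} = \arg\max_{a} \mathbb{E}[F \mid a].
```

The claims below can then be read as claims about what maximizing this expectation actually recommends once higher-order, ecosystem-level effects are accounted for.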
Unlike the unitarian approach, this does seem fairly consistent with a surprising number of human values, given enough reflection. For instance, it does place value on not unduly causing massive amounts of suffering to bees; assuming that such suffering directly affects their ability to perform their functions in ecosystems and the economy, we humans would likely be negatively impacted to some extent. It also seems to endorse cooperation and non-discrimination, since fitness would be hurt both by not taking full advantage of specialization and by allowing others to locally increase their own fitness by throwing ours under the bus.
It also comes with a fairly nice argument for why we should expect people to have a utility function that looks like this. Any individual with values pointing away from fitness would simply be selected out of the population, naturally selecting for this trait.[3] By this point in human evolution, we should expect most people to at least endorse the outcomes of a decision theory based on this utility function (even if they perhaps wouldn’t trust it directly).
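As a rough sketch of why that selection argument goes through (this is just standard replicator dynamics, not anything specific to this comment), any cluster of values whose fitness sits below the population average shrinks in frequency over time:

```latex
% Replicator dynamics: x_i is the population share of value-cluster i,
% f_i its fitness, and \bar{f} = \sum_j x_j f_j the population average.
\dot{x}_i \;=\; x_i \left( f_i - \bar{f} \right)
% If f_i < \bar{f}, then \dot{x}_i < 0: values pointing away from fitness
% lose frequency until they are selected out.
```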
Of course, this theory is inherently morally relativist, but given the current environment we live in, I don’t think that poses a problem for humans trying to use it. One would have to be careful and methodical enough to consider higher-order consequences, but at least it offers a clearer prescription for how one should actually approach problems.
There are some minor issues with this formulation, such as it not directly handling preferences humans have, like transhumanism. I think an even more ideal utility function would be something like “the existence of the property that, by its nature, is the easiest to optimize,” although I’m not sure about that, given how quickly it descends into fundamental philosophical questions.
Also, if any of you know if there’s a more specific name for this version of moral relativism, I would be happy to know! I’ve been trying to look for it (since it seems rather simple to construct), but I haven’t found anything.
- ^
Of course, it wouldn’t be exact, owing to reliance on the ancestral environment, the computational and informational difficulty of determining fitness, and the unfortunately slow pace of evolution, but it should still be good enough as an approximation for large swaths of System 1 thinking.