> That is, in the worst case they could just behave exactly like a pure replicator. And they could do this without actually surrendering their values. So any argument of the form “there is no way anything that cares about us can survive in Malthusian equilibrium” seems false.
I think it’s quite plausible that this is actually not possible: either at technological maturity or in the run-up to it, transmitting values like caring about humans to the next generation of agents may be difficult or costly enough that such agents are outcompeted and disappear.
Another concern I have about Malthusian scenarios (beyond “deadweight loss” in your post) is that there will be an astronomical number of agents (potential moral patients) with little surplus to spend on things aside from survival and reproduction. What if they have net negative lives, and either negative utilitarianism is true, or there isn’t enough overall surplus to make the universe net positive?