I’m a bit confused about what exactly you mean, and if I attribute to you a view that you do not hold, please correct me.
I think the assumption that there is one correct population ethics is wrong, and that it’s totally fine for each person to have different preferences about the future of the universe, just as they have preferences about which ice cream is best.
This kind of argument has always puzzled me. Your ethical principles are axioms: you define them to be correct, and that should compel you to believe that everybody else’s ethics, insofar as they violate those axioms, are wrong. That is where the “objectivity” comes from. It doesn’t matter what other people’s ethics are; my ethical principles are objectively the way they are, and that is all the objectivity I need.
Imagine there were a group of people who used a set of axioms for counting the natural numbers that violated the Peano axioms in some straightforward way, such that they came to a different conclusion about what 5 + 3 equals. What do you think the significance of that should be for your mathematical understanding? My guess is: “Those people are wrong, and I don’t care what they believe. I don’t want to needlessly offend them, but it doesn’t change anything about how I view the world, or how we should construct our technological devices.”
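For concreteness: under the standard Peano-style inductive definition of the naturals, 5 + 3 = 8 is not a matter of taste but falls out mechanically from the definition of addition. A minimal sketch in Lean 4 (names `N` and `add` are my own, defined from scratch rather than taken from any library):

```lean
-- Peano-style naturals: zero, and a successor for every natural
inductive N where
  | zero : N
  | succ : N → N

open N

-- Addition defined by recursion on the second argument
def add : N → N → N
  | n, zero   => n
  | n, succ m => succ (add n m)

-- 5 + 3 = 8 holds by unfolding the definition (`rfl`): no choice involved
example :
    add (succ (succ (succ (succ (succ zero)))))
        (succ (succ (succ zero)))
  = succ (succ (succ (succ (succ (succ (succ (succ zero))))))) := rfl
```

Anyone who accepts these axioms is thereby committed to the conclusion; rejecting the conclusion means rejecting an axiom, which is exactly the situation described above.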
Likewise, if a deontologist says “Human challenge trials for COVID are wrong, because [deontological reason]”, my reaction (I’m a utilitarian) is pretty much the same.
I understand that there are different kinds of people with vastly different preferences for what we should try to optimize for (or whether we should try to optimize for anything at all). But why should that stop me from being persuaded by arguments that honor the axioms I believe in, and why should I consider arguments that rely on axioms I reject?
I realize I’ll never be able to change a deontologist’s mind using utilitarian arguments, and that’s fine. When longtermists use utilitarian arguments in favor of longtermism, they assume that the recipient is already a utilitarian, or at least that they can be persuaded to become one.
Saying “if you are worried, you should get your booster” is not at all equivalent to “if you have gotten your booster, you should not be worried about this virus”.
The first statement allows that one can still be worried after having gotten one’s booster (just less so), whereas the second statement rules this out. Therefore, the two statements cannot be logically equivalent.
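The non-equivalence can be checked mechanically. Treating the two claims as plain material conditionals is a simplification (the originals are “should” claims, which are deontic, not purely propositional), but it suffices to exhibit an assignment where one holds and the other fails:

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

# Simplified propositional reading of the two claims:
#   s1: "if you are worried, you should get your booster"  -> worried implies boostered
#   s2: "if you have gotten your booster, you should not
#        be worried"                                       -> boostered implies not worried
witnesses = []
for worried, boostered in product([False, True], repeat=2):
    s1 = implies(worried, boostered)
    s2 = implies(boostered, not worried)
    if s1 != s2:
        witnesses.append((worried, boostered))

# worried=True, boostered=True satisfies s1 but falsifies s2:
# exactly the "still a little worried after the booster" case.
print(witnesses)  # → [(True, False), (True, True)]
```

The witness `(True, True)` is the case described above: someone who got the booster and remains (somewhat) worried is consistent with the first statement but contradicts the second.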
If you had written “if you have gotten your booster, you should not be as worried about this virus [as before]”, that would have been fine.