Thank you, I had no idea markdown needed to be explicitly activated in the profile.
Saying “if you are worried, you should get your booster” is not at all equivalent to “if you have gotten your booster, you should not be worried about this virus”.
The first statement permits that one can still be worried after having gotten one’s booster (just a little less), which the second statement does not permit. Therefore, those two statements cannot be logically equivalent.
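To make the asymmetry explicit, here is a minimal sketch in propositional form (the letters $W$ for “worried” and $B$ for “boostered” are my own shorthand, and “should” is glossed as a bare implication):

\[
\underbrace{W \to B}_{\text{first statement}} \quad\not\equiv\quad \underbrace{B \to \neg W}_{\text{second statement}}
\]

The left formula is perfectly consistent with $B \land W$ (boostered and still somewhat worried); the right formula excludes exactly that case.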
If you had written “if you have gotten your booster, you should not be as worried about this virus [as before]”, that would have been fine.
I don’t know where you live, but in Europe you can get your antibody level measured for around 50€.
Tin (mostly due to glass production) and phosphorus (for fertilizers) are two more examples of chemical elements that we are running out of rather quickly. Not completely and irreversibly, but enough to cause insane price spikes.
Sand, including the high-purity silica sand used for chip production, is also running low and isn’t easy to replace.
I’m a bit confused about what exactly you mean, and if I attribute to you a view that you do not hold, please correct me.
> I think the assumption that there is one correct population ethics is wrong, and that it’s totally fine for each person to have different preferences about the future of the universe, just like they have preferences about which ice cream is best.
This kind of argument has always puzzled me. Your ethical principles are axioms; you define them to be correct, and this should compel you to believe that everybody else’s ethics, insofar as they violate those axioms, are wrong. This is where the “objectivity” comes from. It doesn’t matter what other people’s ethics are: my ethical principles are objectively the way they are, and that is all the objectivity I need.
Imagine there were a group of people who used a set of axioms for counting (the natural numbers) that violated the Peano axioms in some straightforward way, such that they came to a different conclusion about what 5 + 3 is. What do you think the significance of that should be for your mathematical understanding? My guess is “those people are wrong, I don’t care what they believe. I don’t want to needlessly offend them, but that doesn’t change anything about how I view the world, or how we should construct our technological devices.”
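For concreteness: under the usual Peano recursion for addition, $n + 0 = n$ and $n + S(m) = S(n + m)$, the answer is forced:

\[
5 + 3 = 5 + S(S(S(0))) = S(5 + S(S(0))) = S(S(5 + S(0))) = S(S(S(5 + 0))) = S(S(S(5))) = 8,
\]

so a group that arrives at anything else must be rejecting at least one of those axioms.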
Likewise, if a deontologist says “Human challenge trials for covid are wrong, because [deontological reason]”, my reaction to that (I’m a utilitarian) is pretty much the same.
I understand that there are different kinds of people with vastly different preferences for what we should try to optimize for (or whether we should try to optimize for anything at all), but why should that stop me from being persuaded by arguments that honor the axioms I believe in, or why should I consider arguments that rely on axioms I reject?
I realize I’ll never be able to change a deontologist’s mind using utilitarian arguments, and that’s fine. When the longtermists use utilitarian arguments to argue in favor of longtermism, they assume that the recipient is already a utilitarian, or at least that he can be persuaded to become one.
There’s this post, which suggests that original antigenic sin is unlikely to be a problem. I really hope that’s true.