Is there a reason why it wouldn’t be strongly correlated?
Your “serious” modifier sounds to me like you’re envisioning the consensus among the masses changing while smart people remain more sober. I was largely assuming that, in the worlds where Aubrey’s prediction is true, actual life expectancy does, in fact, increase along with the awareness shift. Note that it’s expectancy rather than actual life span.
Pensions might be a good pointer.
Won’t there be more indirect consequences? If we suddenly expect people to live longer, even if the technology will take a while to arrive, wouldn’t that benefit some companies relative to others?
A while ago I asked LW about their take on Aubrey’s prediction (forecasting a massive and sudden shift in public awareness of the feasibility of anti-aging technology) and about how, if it is accurate, one could make money out of knowing that.
I got several answers on the first question (summary: people aren’t buying it) but none on the second. Is there any way to bet on this? I know very little about public trading, but it feels like such a big event ought to affect the stock market somehow. Am I wrong, or is it too difficult to estimate how?
Right, so you set the standard higher than simply talking about it. That wasn’t clear to me from your previous post but it makes sense.
If you post it anyway (maybe as a top-level post for visibility?), I’ll strong-upvote it. I vehemently disagree with you, but even more vehemently than that, I disagree with allowing this kind of cost to conceal potentially useful information, like big critiques.
I think you’re ignoring the harms from posting something uncivil. Civility is an extremely important norm. I would not support something that is directly insulting, even if it is an important critique.
However, I did strong-upvote this comment (meaning sirjackholland’s comment on this post) and I applaud them both for not publishing their original critique and for expressing their position anyway.
I might be misunderstanding you, but doesn’t Elizabeth explicitly state that this discussion did take place here?
I reject the framing of truth vs winning. Instead, I propose that only winning matters. Truth has no terminal value.
It does, however, have enormous instrumental value. For this reason, I support the norm of always telling the truth, even if it appears as if the consequences are net negative – with the reasoning that they probably aren’t, at least not in expectation. This is so in part because truth feels extremely important to many of us, which means that having such a norm in place is highly beneficial.
The other response is much more interesting, arguing that appeals to consequences are generally bad, and that meta-level considerations mean we should generally speak the truth even if the immediate consequences are bad. I find this really interesting because it is ultimately about infohazards: those rare cases where there is a conflict between epistemic and instrumental rationality. Typically, we believe that having more truth (via epistemic rationality) is a positive trait that allows you to “win” more (thus aligning with instrumental rationality). But when more truth becomes harmful, which do we prioritize: truth, or winning?
The keyword here is “immediate” [emphasis added], which you drop by the end. I agree with the first part of this paragraph but disagree with the final sentence. Instead, my question would have been, “but when more truth appears to become harmful, how do we balance the immediate consequences against the long term/fuzzy/uncertain but potentially enormous consequences of violating the truth norm?”
I read jimrandomh’s comment as reasoning from this framework (rather than arguing that we should assign truth terminal value), but this might be confirmation bias.
Specific to the thread: I bought copper tape, a pulse oximeter, and I’m much more careful with packages.
Partially due to LW, but not that thread in particular: I’ve stocked up on food and am getting rid of the habit of constantly touching my face.
I’m wondering how well viruses stick to/survive on clothing. In trying to avoid touching my face, I’ve occasionally resorted to using the sleeves of my hoodie instead – which I also use to touch surfaces like door knobs or light switches. Should I use my elbow for those instead?
I see this problem all the time with regard to things that can be classified as “childish”. Besides pandemics, the most striking examples in my mind are the risk of nuclear war and the risk of AI, but I expect there are lots of others. I don’t exactly think of it as signaling wisdom, but as signaling being a serious-person-who-understands-that-unserious-problems-are-low-status (the difference being that it doesn’t necessitate thinking of yourself as particularly “smart” or “wise”).
You can check out my attempt on Metaculus to capture the essence of his claim, though it’s debatable whether I succeeded. Right now Metaculus says there’s a 75% chance of something culturally significant happening in anti-aging research in the 2020s.
This is good. However, you did set the bar for positive resolution a lot lower than I would have based on what he claimed this time around.
My bad for being unclear. We’re strictly talking about the perception of it being a fact. I did not intend to include any factual claim in my paraphrasing.
I added an “alleged” in there. I think it’s a pretty safe bet that Aubrey considers it a fact.
I don’t think there is a UDT idea that prescribes cooperating with non-UDT agents. UDT is sufficiently formalized that we know what happens if a UDT agent plays a prisoner’s dilemma against a CDT agent and both parties know each other’s algorithm/code: they both defect.
If you want to cooperate out of altruism, I think the solution is to model the game differently. The outputs that go into the game theory model should be whatever your utility function says, not your well-being. So if you value the other person’s well-being as much as yours, then you don’t have a prisoner’s dilemma because cooperate/defect is a better outcome for you than defect/defect.
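To make that concrete, here’s a minimal sketch in Python. The well-being payoffs and the altruism weight are illustrative assumptions of mine, not anything from the comment above; the point is just that feeding both players’ well-being through your utility function changes which outcomes you prefer:

```python
# Toy prisoner's dilemma: (my well-being, their well-being) per outcome.
# Payoff numbers and the altruism weight are illustrative assumptions.
WELLBEING = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def utility(my_wellbeing, their_wellbeing, altruism=1.0):
    # Utility = own well-being plus the other's well-being, weighted.
    return my_wellbeing + altruism * their_wellbeing

for (me, them), (w_me, w_them) in WELLBEING.items():
    print(me, them, utility(w_me, w_them))
# C C 6.0
# C D 5.0  <- cooperate/defect beats...
# D C 5.0
# D D 2.0  <- ...defect/defect, so this is no longer a prisoner's dilemma.
```

With full altruism (weight 1.0), cooperating in fact dominates outright: it’s the better move whatever the other player does.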
by buying food they are limiting others’ chances to buy it.
But they’re only doing that if there will, in fact, be a supply shortage. That was my initial point – it depends on how many other people will stockpile food.
Thanks; fixed & will try to remember.
If we use some variant of UDT, the same line of reasoning is experienced by many other minds and we should reason as if we have causal power over all these minds.
As I understand UDT, this isn’t right. UDT 1.1 chooses an input-output mapping that maximizes expected utility. Even assuming that all people who read LW run UDT 1.1, this choice still only determines the input-output behavior of a couple of programs (humans); the outputs of programs that aren’t running UDT don’t depend on our choice and are held constant. Therefore, if you formalized this problem, UDT’s output could be “stockpile food” even if [every human doing that] would lead to a disaster.
I think “pretend as if everyone runs UDT” was neither intended by Wei Dai nor is it a good idea. Put differently, UDT agents don’t cooperate in a one-shot prisoner’s dilemma if they play against CDT agents.
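Here’s a toy formalization of that point – a sketch under assumptions I’m making up (the agent counts, threshold, and payoffs are illustrative, and everyone else is modeled as a fixed program):

```python
# UDT 1.1 picks one policy for all agents running it; the outputs of
# non-UDT programs are held constant rather than controlled by that choice.
# All numbers below are illustrative assumptions.

NON_UDT_STOCKPILERS = 1_000       # fixed behavior of everyone else
UDT_AGENTS = 50                   # the few readers actually running UDT
SHORTAGE_THRESHOLD = 1_000_000    # stockpilers needed to cause a shortage

def expected_utility(policy):
    total = NON_UDT_STOCKPILERS + (UDT_AGENTS if policy == "stockpile" else 0)
    preparedness = 10 if policy == "stockpile" else 0
    shortage_penalty = -1_000 if total >= SHORTAGE_THRESHOLD else 0
    return preparedness + shortage_penalty

print(max(["stockpile", "abstain"], key=expected_utility))
# -> stockpile: the UDT agents alone stay far below the threshold,
#    even though [every human stockpiling] would blow past it.
```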
Also: if a couple of people stockpile food, but most people don’t, that seems like a preferable outcome to everyone doing nothing (provided stockpiling food is worth doing). It means some get to prepare, and the food market isn’t significantly affected. So this particular situation actually doesn’t seem to be isomorphic to the prisoner’s dilemma (if modeled via game theory).
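A quick toy model of that difference (the numbers are made up): in a prisoner’s dilemma, defecting is better for you no matter what others do, whereas here the payoff to stockpiling depends on what fraction of the population joins in:

```python
# Payoff to one stockpiler as a function of how many others stockpile.
# The threshold and payoff numbers are illustrative assumptions.

SHORTAGE_FRACTION = 0.2  # assumed point where supply shortages kick in

def stockpiler_payoff(fraction_stockpiling):
    if fraction_stockpiling < SHORTAGE_FRACTION:
        return 10.0  # prepared, and the food market is barely affected
    return 10.0 - 100.0 * (fraction_stockpiling - SHORTAGE_FRACTION)

for f in (0.01, 0.1, 0.5, 1.0):
    print(f, stockpiler_payoff(f))
# 0.01 10.0   a few stockpilers gain and impose ~no cost on anyone
# 0.1  10.0
# 0.5  -20.0  past the threshold, stockpiling starts to backfire
# 1.0  -70.0
```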
The advice of this post seems to be advice on the margin (i.e., assuming everything else is held constant), which seems reasonable given that this one post won’t change collective behavior by much.
So the question isn’t “what happens if everyone stockpiles food?” but rather, “do we expect enough people to stockpile food that stockpiling more food will lead to bad consequences?”. I don’t know the answer to that one.