But wait, wouldn’t doing things like saving for retirement still make sense? Or is p(we all die) really that high?
MIRI doesn’t offer its employees a retirement plan. (OpenAI did, and this was viewed with some consternation by the more AGI-pilled employees.)
I think “the singularity is my retirement plan” is not a crazy position; it is mostly irrelevant to my personal financial situation, tho.
Is that purely because they think AI-driven-extinction is almost certain or is it a combination of that and “even if we survive we probably won’t need retirement money anyway”?
I think it began as the latter and became the former. (Like, when I worked there the situation seemed rosier than it does today.)
This is becoming less and less about the actual OP, but I really do still want to ask—do you think it is a near-certainty though? (Like a >99% chance of AI killing us all soon, I mean.)
Eh, it’s sort of hard to talk about the overall future? That is roughly my confidence level of doom conditional on, like, Anthropic doing RSI starting in the next year. But that happening feels more like a 10–20% chance, and it’s harder to estimate what the doom probabilities will look like as we get further into the future (in part because, if we don’t get RSI soon, something will have caused that, and it’s not obvious how that something affects further development).