“It’s a mistake to act like nobody else is using the same method to make decisions.” That would be relevant for making her choose the bright side over the short-term-lucrative dark side IFF she assumed her personal decision pivots a large share of others’ decisions. Your sentence per se isn’t wrong, but what actually matters is:
Irrespective of how her ultimate decision correlates with others’, she is not the one singlehandedly pivoting most others’ decisions.
Ceteris paribus, she herself is not the single pivotal actor in the game.
Hence:
P(doom | she goes ascetic and gives everything toward AI safety for the coming few months/years) ≈ 0.99999 × P(doom | she joins the dark profiteering side).
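To spell out the arithmetic behind this near-equality (a minimal worked example: the 0.99999 factor is the figure above, while the baseline P(doom | dark) = 0.5 is a purely hypothetical number for illustration):

\[
\Delta P \;=\; P(\mathrm{doom}\mid\mathrm{dark}) - P(\mathrm{doom}\mid\mathrm{safety}) \;\approx\; (1 - 0.99999)\,P(\mathrm{doom}\mid\mathrm{dark}) \;=\; 10^{-5}\,P(\mathrm{doom}\mid\mathrm{dark})
\]

So at the hypothetical baseline of 0.5, her individual choice shifts P(doom) by roughly \(5 \times 10^{-6}\): real, and worth multiplying by astronomical stakes, but easily dwarfed in her own ledger by the personal cost she is asked to bear.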
So, however much we’d love it to be different: you may have to be relatively heroic, rather than merely human-standard shallowly/slightly altruistic, to join the good side when the dark side offers you any reasonably sweet short-term temptations.
On the contrary, it would take an even stranger situation than the one we’re in for this to obviously go the other way round, as you still seem to want to imply (if I read your short statements right). Yes, in a world where doom were extremely likely to arrive within days or months anyway, so that she wouldn’t even get a few months or years to enjoy her cake, and where she additionally had a relatively high-probability individual influence over doom’s arrival or severity, you could reproach her not only for egoism but also for short-sighted stupidity. But I can barely imagine anyone justifiably holding such strong beliefs about the world plus about their own individual probabilistic impact.
This should not discourage anyone. For a normal person, the unbelievably high stakes hopefully still tilt you toward AI safety rather than toward the best-paid AI-advancing job. Is it even an enjoyable life to get rich while taking such risks? If you care a bit about your fellow beings, hopefully not, so even choosing wisely ‘for yourself’ may mean not being interested in the higher-paid dark job in the first place. What a great, rewarding feeling it can be to invest yourself in the bright side in such times! But all of this hinges on a healthy dose of altruistic feeling inside you; that’s the point, and that’s why it doesn’t help to reproach mere short-sighted stupidity. If we do only that, we’ll miss how the actual incentives play out for perfectly reasonable but slightly-too-egoistic people.