And last, the rending pain of re-enactment
Of all that you have done, and been; the shame
Of motives late revealed, and the awareness
Of things ill done and done to others' harm
Which once you took for exercise of virtue.
Then fools' approval stings, and honour stains.
From wrong to wrong the exasperated spirit
Proceeds, unless restored by that refining fire
Where you must move in measure, like a dancer.
— T. S. Eliot, Little Gidding
It’s “Even assuming you’re right, there’s nothing you can do about it. And that’s not to say something couldn’t be done about it from another approach.” Holden made it clear that he doesn’t think SI is right.
I translate this general form of argument as, “Yes, you’re right, but there’s nothing you can do about it.” Which is to say, the existential risk is so inevitable that any well-defined solution will be seen to have no chance of impacting the final result.
In which case, I borrow thoughts from Peter Thiel, who states that any world where humans no longer exist might as well be ignored as a possibility for investment, so invest in the avenues which might have a chance of continuing.
(Though you’d likely want to make a weaker form of Thiel’s argument if possible, as it’s not been convincingly demonstrated that a scenario with a non-Friendly superintelligence is necessarily or likely a scenario where humans no longer exist. As a special case, there are some who are especially worried about “hell worlds”—if pushing for Friendliness increases the probability of hell worlds, as has sometimes been argued, then it’s not clear that you should discount such possible futures. More generally I have a heuristic that this “discount and then renormalize” approach to strategizing is generally not a good one; in my personal experience it’s proven a bad idea to assume, even provisionally, that there are scenarios that I can’t affect.)
“Many acquire the serenity to accept what they cannot change, only to find the ‘cannot change’ is temporary and the serenity is permanent.” — Steven Kaas
Would you mind sharing a concrete example?
The Forbidden Toy is the classic. Google Scholar on “forbidden toy” provides more on the subject, with elaboration and alternate hypothesis testing and whatnot.
Thanks.
I couldn’t immediately remember the experience that led me to strongly believe it, but luckily the answer came to me in a dream. Turns out it’s just personal stuff having to do with a past relationship that I cared a lot about. There are other concrete examples but they probably don’t affect my decision calculus nearly as much in practice. (Fun fact: I learned many of my rationality skillz via a few years in high school dating a really depressed girl.)