Your comment about 1e-6 p-doom is not right because we face many other X-risks that developing AGI would reduce.
Otherwise yeah, I’m on board with the mood of your post.
Personally I really like doing math/philosophy and I have convinced myself that it is necessary to avert doom. At least I’m not accelerating progress much!
Your comment about 1e-6 p-doom is not right because we face many other X-risks that developing AGI would reduce.
Ah you’re right, I wasn’t thinking about that. (Well I don’t think it’s obvious that an aligned AGI would reduce other x-risks, but my guess is it probably would.)