I think this is clearly a strawman. I’d also argue individual actors can have a much bigger impact on something like AI safety than on the trajectory of climate change.
The actual post in question is not what I would classify as “maximally doomerish” or resigned at all, and I think it’s overly dismissive to turn the conversation towards “well you shouldn’t be maximally doomerish”.
I mean, sure, maybe “maximally doomerish” is not exactly the right term for me to use. But there’s definitely a tendency for people to worry that being insufficiently emotionally scared and worried will make them complacent. To be clear, this is not about your epistemic p(doom); I happen to think AGI killing everyone is more likely than not. But really feeling this deeply emotionally is very counterproductive to my actually reducing x-risk.
To clarify, the original post was not meant to be resigned or maximally doomerish. I intend to win in worlds where winning is possible, and I was trying to get across the feeling of doing that while recognizing things are likely(?) to not be okay.
I agree that being in the daily, fight-or-flight, anxiety-inducing super-emergency mode of thought that thinking about x-risk can induce is very bad. But it’s important to note that you can internalize the risks and probable futures very deeply, including emotionally, while still being productive, happy, sane, etc.: a high distaste for drama, forgiving yourself and picking yourself back up, and so on.
This is what I was trying to gesture at, and I think what Boaz is aiming at as well.
I think we are in agreement! It is definitely easier for me, given that I believe things are likely to be OK, but I still assign non-trivial likelihood to the possibility that they will not be. But regardless of what you believe is more likely, I agree you should both (a) do what is feasible for you to have a positive impact in the domains you can influence, and (b) keep being productive, happy, and sane without obsessing over factors you do not control.