with respect to the climate change example, it seems instructive to observe the climate people who feel an urge to be maximally doomerish because anything less would be complacent, and see if they are actually better at preventing climate change. I’m not very deeply embedded in such communities, so I don’t have a very good sense. but I get the vibe that they are in fact less effective towards their own goals: they are too prone to dismiss actual progress, lose a lot of productivity to emotional distress, are more susceptible to totalizing “david and goliath” ideological frameworks, descend into purity spiral infighting, etc. obviously, the facts of AI are different, but this still seems instructive as a case study to look deeper into.
It does happen to be the case that thinking that climate change has much of a chance of being existentially bad is just wrong. Thinking that AI is existentially bad is right (at least according to me). A major confounder to address is that conditioning on a major false belief will of course be indicative of being worse at pursuing your goals than conditioning on a major true belief.
sure, I agree with the object level claim, hence why I say the facts of AI are different. it sounds like you’re saying that because climate change is not that existential, if we condition on people believing that climate change is existential, then this is confounded by those people also being worse at believing true things. this is definitely an effect. however, I think there is an ameliorating factor: as an emotional stance, existential fear doesn’t have to literally be induced by human extinction; while the distinction between different levels of catastrophe matters a lot consequentially, most people’s emotional capacity for fear caps out at levels of catastrophe well below x-risk.
of course, you can still argue that given AGI is bigger, then we should still be more worried about it. but I think rejecting “AGI is likely to kill everyone” indicts one’s epistemics a lot less than accepting “climate change is likely to kill everyone” does. so this makes the confounder smaller.
I think this is clearly a strawman. I’d also argue individual actors can have a much bigger impact on something like AI safety relative to the trajectory of climate change.
The actual post in question is not what I would classify as “maximally doomerish” or resigned at all, and I think it’s overly dismissive to turn the conversation towards “well you shouldn’t be maximally doomerish”.
I mean, sure, maybe maximally doomerish is not exactly the right term for me to use. but there’s definitely a tendency for people to worry that being insufficiently emotionally scared will make them complacent. to be clear, this is not about your epistemic p(doom); I happen to think AGI killing everyone is more likely than not. but really feeling this deeply emotionally is very counterproductive for me actually reducing x-risk.
To clarify, the original post was not meant to be resigned or maximally doomerish. I intend to win in worlds where winning is possible, and I was trying to get across the feeling of doing that while recognizing things are likely(?) to not be okay.
I agree that being in the daily, fight-or-flight, anxiety-inducing super-emergency mode of thought that thinking about x-risk can induce is very bad. But it’s important to note you can internalize the risks and probable futures very deeply, including emotionally, while still being productive, happy, sane, etc. That means a high distaste for drama, forgiving yourself and picking yourself back up when you stumble, and so on.
This is what I was trying to gesture at, and I think what Boaz is aiming at as well.
I think we are in agreement! It is definitely easier for me, given that I believe things are likely to be OK, but I still assign non-trivial likelihood to the possibility that they will not be. But regardless of what you believe is more likely, I agree you should both (a) do what is feasible for you to have positive impact in the domains you can influence, and (b) keep being productive, happy, and sane without obsessing over factors you do not control.