Definitely. Excellent point. See my short bit on motivated reasoning, in lieu of the full post I have on the stack that will address its effects on alignment research.
I frequently try to correct my own timelines and takes for potential motivated reasoning effects. The result is usually to broaden my estimates and add uncertainty, because it's difficult to identify which direction MR might've been pushing me during all of the mini-decisions that led to my beliefs and models. My motivations are many, and which of them happened to be contextually relevant at key decision points is hard to guess.
On the whole, I'd guess that MR effects are on average larger for long timelines and low p(doom)s. Both allow us to imagine a sunny near future and to keep working on our preferred projects, instead of panicking and shifting to work that could help with alignment if AGI arrives soon. Sorry, this deserves a much more careful discussion; that's just my guess in the absence of pushback.