Some people here seem to think that motivated reasoning only affects people who want a particular outcome, so those concerned about doom and catastrophe can't possibly be susceptible. This is a mistake. Everyone desires vindication. No one wants to be the person so cautious that they never get praised for their insight. This pushes people toward favoring extreme outcomes, because extreme views grab much more attention, and the chance of being seen as right feels a lot better than being wrong feels bad (it's easy to avoid fault for false predictions and claim credit for true ones).
Obviously, this is just one possible bias; maybe Daniel and others with super short timelines are still very well calibrated. But it bears consideration.
Not only is that just one possible bias, it's a less common bias than its opposite. Generally speaking, more people are afraid to stick their necks out and say something extreme than are actively biased toward doing so. Generally speaking, being wrong feels worse than being right feels good. There are exceptions; some people are contrarians, for example (and it's plausible I'm one of them), but again, talking about people in general, the bias runs in the opposite direction from what you say.
Definitely. Excellent point. See my short bit on motivated reasoning, in lieu of the full post I have on the stack that will address its effects on alignment research.
I frequently check whether my timelines and takes need correcting for motivated reasoning (MR). The result is usually to broaden my estimates and add uncertainty, because it's difficult to identify which direction MR might have been pushing me during all of the mini-decisions that led to my beliefs and models. My motivations are many, and which ones happened to be contextually relevant at key decision points is hard to guess.
On the whole, I'd have to guess that MR effects are on average larger for long timelines and low p(doom): both let us imagine a sunny near future and keep working on our preferred projects instead of panicking and shifting to work that could help with alignment if AGI happens soon. Sorry, this is worth a much more careful discussion; that's just my guess in the absence of pushback.