I don’t think many with monotonically increasing doom pay attention to current or any alignment research when they make their updates
Maybe I am just one of the “not many”. But I think this depends on how closely you track your timelines.
Personally, my timelines are uncertain enough that most of my substantial updates have been in the earlier direction (like from median ~2050 to median 2030–2035). This probably happens to a lot of people who newly enter the field, because they naturally put more weight at first on surveys like the one you mentioned.
I think my biggest ones were:
going from “taking the takes of capabilities researchers at face value, not having my own model, and just going with Metaculus” to “having my own views”.
GPT2 (…and the log loss still goes down) and then the same with GPT3. In the beginning, I still had substantial probability mass (30%) on this trend just not continuing.
Minerva (apparently getting language models to do math is not that hard, which was basically my last “trip wire” going off).
I do think my P(doom) has slightly decreased from seeing everyone else finally freaking out.
Past me is trying to give himself too much credit here. Most of it was epistemic luck / high curiosity that led him to join Søren Elverlin’s reading group in 2019, after which I just got exposed to the takes from the community.