It would be consistent with the pattern that short-timeline doomy predictions get signal-boosted here, and wouldn’t rule out the something-about-trauma-on-LessWrong hypothesis for that signal-boosting. No doubt about that! But I wasn’t talking about which predictions get signal-boosted; I was talking about which predictions get made, and in particular why the predictions in AI 2027 were made.
Consider Jane McKindaNormal, who has never heard of LessWrong and isn’t really part of the cluster at all. I wouldn’t guess that a widespread pattern among LessWrong users had affected Jane’s predictions regarding AI progress. (Eh, not directly, at least...) If Jane were the sole author of AI 2027, I wouldn’t guess that she’s making short-timeline doomy predictions because people are doing so on LessWrong. If all of her predictions were wrong, I wouldn’t guess that she mispredicted because of something-about-trauma-on-LessWrong. Perhaps she could have mispredicted because of something-about-trauma-by-herself, but there are a lot of other hypotheses hanging around, and I wouldn’t start with the hard-to-falsify ones about her upbringing.
I realized, after some thought, that the AI 2027 authors are part of the cluster, and I hadn’t taken that into account. “Oh, that might be it,” I thought. “OP is saying that we should (prepare to) ask if Kokotajlo et al., specifically, have preverbal trauma that influenced their timeline forecasts. That seemed bizarre to me at first, but it makes some sense to ask it because they’re part of the LW neighborhood, where other people are showing signs of the same thing. We wouldn’t ask this about Jane McKindaNormal.” Hence the question, to make sure that I had figured out my mistake. But it looks like I was still wrong. Now my thoughts are more like, “Eh, looks like I was focusing too hard on a few sentences and misinterpreting them. The OP is less focused on why some people have short timelines, and more on how those timelines get signal-boosted while others don’t.” (Maybe that’s still not exactly right, though.)