Assuming that you think you’d look into it in a reasonable way, then you’d be much more likely to reach a doomy conclusion if it were actually true.
This is too optimistic an assumption. On one hand, we have Kirsten’s ability to do AI research. On the other hand, we have all the social pressure that Kirsten complains about. You seem to assume that the former is greater than the latter, which may or may not be true (no offense meant).
An analogy with religion would be telling someone to do independent research on the historical truth about Jesus. In theory, that should work. In practice… maybe that person has no special talent for historical research; plus there is always the background knowledge that arriving at the “incorrect” answer would cost them all their current friends anyway. (I hope it does not work the same way with EAs, but the people who can’t stop talking about doom now probably won’t be able to stop talking about it even if Kirsten tells them “I have done my research, and I disagree.”)
My response to both paragraphs is that the relevant counterfactual is “not looking into/talking about AI risks.” I claim that there is at least as much social pressure from the community to take AI risk seriously and to talk about it as there is to reach a pessimistic conclusion, and that people are very unlikely to lose “all their current friends” by arriving at an “incorrect” conclusion if their current friends are already fine with the person not having any view at all on AI risks.
This is exactly how I feel; thank you for articulating it so well!