Very simple gears in a subculture’s worldview can keep being systematically misperceived if it’s not considered worthy of curious attention. On the LocalLLaMA subreddit, I keep seeing assumptions that AI safety people call for never developing AGI, or claim that current models can contribute to destroying the world. Almost never does anyone bother to contradict such claims or assumptions. This doesn’t happen because it’s difficult to figure out; it happens because the AI safety subculture is seen as unworthy of engagement, so people don’t learn what it’s actually saying, and don’t correct each other’s errors about what it’s actually saying.
This gets far worse with more subtle details: the bar for willingness to engage rises to actually studying what the others are saying, since the subtleties would be difficult to figure out even with curious attention. Rewarding engagement is important.
I agree. It’s rare enough to get reasonable arguments for optimistic outlooks, so this seems worth someone engaging with openly and in some detail.

Yeah, the fact that the responses to the optimistic arguments sometimes rely on simply not engaging with them in detail at all has really dimmed my prospects of reaching out, and causes me to think more poorly of the AI doom case, epistemically.

This actually happened before with Eliezer’s comment here:
https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky#YYR4hEFRmA7cb5csy