a comment thread of mostly AI-generated summaries of LessWrong posts, so I can save them in a slightly public place for future copypasting without them showing up in the comments of the posts themselves
https://www.lesswrong.com/posts/uA4Dmm4cWxcGyANAa/x-distracts-from-y-as-a-thinly-disguised-fight-over-group
The argument that concerns about future AI risks distract from current AI problems does not hold up when analyzed directly, since the two sets of concerns can complement each other rather than compete for attention.
The real motivation behind this argument may be an implicit competition over group status and political influence, with endorsements of certain advocates seen as wins or losses.
Advocates for AI safety and advocates for addressing current harms are not necessarily opposed, and could find areas of agreement such as interpretability research.
AI safety advocates should avoid framing their work as more important than current problems, or arguing that resources should shift away from them, as this can antagonize potential allies.
Both future risks and current harms deserve consideration and efforts to address them can occur simultaneously rather than as a false choice.
Concerns over future AI risks come from a diverse range of political ideologies, not just tech elites, showing it is not a partisan issue.
Cause prioritization, which aims to quantify and compare issues, can seem offensive, but it is intended to direct efforts where they will have the greatest positive impact.
Rationalists concerned with AI safety also care about other, less consequential issues, showing that people can support multiple related causes at once.
Framing debates as zero-sum competitions undermines potential for cooperation between groups with aligned interests.
Building understanding and alliances across different advocacy communities could help maximize progress on AI and its challenges.
https://www.lesswrong.com/posts/nt8PmADqKMaZLZGTC/inside-views-impostor-syndrome-and-the-great-larp
Experts like Yoshua Bengio have deep mental models of their field that allow them to systematically evaluate new ideas and understand barriers, while most others lack such models and rely more on trial and error.
Impostor syndrome may be partly accurate: most people genuinely don't have the deep understanding of their work that experts do, even if they are still skilled relative to others in their field.
Progress can still be made through random experimentation if a field has abundant opportunities and good feedback loops, even without deep understanding.
Claiming nobody understands anything provides emotional comfort but isn’t true—understanding varies significantly between experts and novices.
The real problem with impostor syndrome is the pressure to pretend to understand more than one does.
People should be transparent about what they don’t know and actively work to develop deeper mental models through experience.
The goal should be learning, not just obtaining credentials: pay attention to what works and debug failures.
Have long-term goals and evaluate work in terms of progress towards those goals.
Over time, actively working to understand one’s field leads to developing expertise rather than feeling like an impostor.
Widespread pretending to understand enables a "civilizational LARP" that discourages people from truly learning their professions.