I think there’s a bunch of useful stuff in this post, and am generally very excited about having more cybersecurity experts working on AI safety. Having said that, it feels like a bit of a jump to say that LW (or AI safety overall) should become a hacker community, which would come with a lot of tradeoffs; and I think that this part detracts from the post overall.
I actually thought from the title that you meant “hacker community” as in “getting hands-on with AI, implementing lots of AI stuff” (i.e. hacker in the sense of hackathon). That feels more directly relevant; I think LW would do better to have a less deontological attitude about contributing to AI-related products, and in general to be much more encouraging of people getting hands-on with the latest models.