Why did the Alignment community not prepare tools and plans for convincing the wider infosphere about AI safety years in advance?
For the past two and a half years I’ve been organizing the volunteer team behind AI Safety Info, alongside building a whole raft of other tools like AI Safety Training and AI Safety World.
But, yes, the movement as a whole has dropped the ball pretty hard on basic prep. The real answer is that things are not done by default, and this subculture has relatively few doers compared to thinkers. And the thinkers had very little faith in the wider infosphere, sometimes actively discouraging most doers from trying broad outreach.
I find it interesting that you are the second commenter (after Dan H above) to jump in and explicitly say: I have been doing that!
and point to great previous work doing exactly these things. But from my perspective that work does not seem widely known or supported within the community here (I could be wrong about that)
I am starting to feel that I have a bad map of the AI Alignment/Safety community. My previous impression was that LessWrong / MIRI was mostly the epicenter, and that if much of anything was being done it was coming from there, or at least was well known there. That seems not to be the case, which is encouraging! (I think)
feel that I have a bad map of the AI Alignment/Safety community
This is true of many people, and it’s why I built the map of AI safety :)
The next step is to rebuild aisafety.com into a homepage that ties all of this together, and to offer AI Safety Info’s database via an API for other websites (like aisafety.com, and hopefully LessWrong) to embed.
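As a rough illustration of what that embedding could look like, here is a minimal TypeScript sketch of a client that pulls Q&A entries from a JSON endpoint and renders them on a host page. The endpoint URL, route, and response fields are all hypothetical placeholders, not the actual AI Safety Info API:

```typescript
// Minimal sketch of embedding AI Safety Info content in another site.
// NOTE: the endpoint URL, route, and response shape below are hypothetical,
// invented for illustration; they are not the real AI Safety Info API.

interface QAEntry {
  id: string;       // hypothetical stable identifier for the question
  question: string; // hypothetical question text
  answer: string;   // hypothetical answer text
}

async function fetchEntries(baseUrl: string): Promise<QAEntry[]> {
  const res = await fetch(`${baseUrl}/questions`); // hypothetical route
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status}`);
  }
  return (await res.json()) as QAEntry[];
}

// Render each entry as a collapsible question/answer pair inside a
// container element on the host page.
async function embedQA(baseUrl: string, containerId: string): Promise<void> {
  const container = document.getElementById(containerId);
  if (!container) return;
  for (const entry of await fetchEntries(baseUrl)) {
    const item = document.createElement("details");
    const title = document.createElement("summary");
    title.textContent = entry.question;
    const body = document.createElement("p");
    body.textContent = entry.answer;
    item.append(title, body);
    container.append(item);
  }
}

// Usage on an embedding site (both arguments are placeholders):
embedQA("https://example.org/api", "ai-safety-faq").catch(console.error);
```

Serving plain JSON like this would let each embedding site keep its own styling rather than shipping a fixed widget, though the real API could of course end up looking quite different.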
Great websites!