I asked because I’m pretty sure that I’m being badly wasted (i.e. I could be making much more substantial contributions to AI safety),
I think this is the case for most people in AI Safety right now.
And by the way, your brainchild AI-Plans is a pretty cool resource. I can see it being useful for e.g. a frontier AI organization which thinks they have an alignment plan, but wants to check the literature to know what other ideas are out there.
Thanks! Doing a bunch of stuff atm to make it easier to use and to grow the userbase.