I don’t really agree. The key thing is that I think an exit plan of trustworthy AIs capable enough to obsolete all humans working on safety (but which aren’t superintelligent) is pretty promising. Yes, these AIs might need to come up with novel breakthroughs and new ideas (though I’m also not totally confident in this, or that this is the best route), but I don’t think we need new research agendas to substantially increase the probability that these non-superintelligent AIs are well aligned (e.g., don’t conspire against us and do pursue our interests on hard open-ended tasks), can do open-ended work competently enough, and are wise.
See this comment for some more discussion; I’ve also sent you some content on this in a DM.
Fair point. I guess I still want to say that there’s a substantial amount of ‘come up with new research agendas’ (or sub-agendas) to be done within each of your bullet points, but I agree that the focus on getting trustworthy, slightly superhuman AIs and then no longer needing control makes things much better. I also feel pretty nervous about some of those bullet points as paths to placing so much trust in your AI systems that you no longer feel you need to bother controlling/monitoring them, and the ones that go furthest towards giving me enough trust in the AIs to stop control are also the ones with the most wide-open research questions (e.g., EMs in the extreme case). But I do want to walk back the parts of my comment above that apply only to aligning very superintelligent AI.