Hi all, I am a new community member and it is a pleasure to be here. I am working on a draft post discussing possible opportunities for using safety R&D automation to find or create effective AI superintelligence safety interventions (ones that may not exist yet, or that exist but are not yet prioritized), particularly for short-timeline scenarios (e.g. ASI within 2-6 years from now, following a possible intelligence explosion enabled by AGI creation within 1-3 years from now).
I would be grateful for any pointers to existing discussions of this specific topic on LessWrong that I may not have found yet, so I can read, learn from, and reference them. I do know about the 'superintelligence' tag and will go through those posts; I just wanted to see if anything springs to mind for experienced users. Thank you!