I don't have any kind of full plan. But I am writing a series where I explore the topic of "How might we get help from an unaligned superintelligent AGI-system to make an aligned superintelligent AGI-system, while trying to minimize risk and minimize the probability of being tricked?". Only part 1 is completed so far: https://www.lesswrong.com/posts/ZmZBataeY58anJRBb/getting-from-unaligned-to-aligned-agi-assisted-alignment