I think your remarks suggest that alignment to the level of top humans will happen by default, but not alignment of god-like superintelligence. If so, once we get aligned top-human-level AIs, we can defer the rest of the alignment problem to them.
If I were sure that top-human-level AIs will be aligned by default, here’s what I might work on instead:
1. Automated philosophy
2. Commitment races / safe bargaining
3. Animal suffering
4. Space governance
5. Coordination tech
6. Empowering people with good values
7. Archiving data that aligned AIs might need (e.g. cryonics)
Thank you for your list! Items 3, 5, and 6 seem like the best candidates to me :)