Did you mean to write “build a Task AI to perform a pivotal act in service of reducing x-risks”? Or did MIRI switch from one to the other at some point early on? I don’t know the history. …But it doesn’t matter, my comment applies to both.
I believe there was an intentional switch, around 2016 (though I’m not confident in the date), from aiming to design a Friendly CEV-optimizing sovereign AI, to aiming to design a corrigible minimal Science-And-Engineering AI to stabilize the world (after which a team of probably-uploads could solve the full version of Friendliness and kick off a foom).