Regarding tools versus agent AGIs, I think the desired end game is still a Friendly Agent AGI. I am open to tool AIs being useful on the path to building such an agent. Similar ideas advocated by SI include using automated theorem provers to formally prove Friendliness, and creating a seed AI to compute the Coherent Extrapolated Volition of humanity and build an FAI with the appropriate utility function.