creation of slow-thinking, poorly understood unFriendly AGIs is not any help in developing an FAI (they can’t be “debugged” when you don’t have an accurate understanding of what it is you are aiming for; and they can’t be “asked” to solve a problem which you can’t accurately state)
Given that AGI has not been achieved yet, and that an FAI will be an AGI, it seems like any AGI would serve as a useful prototype and give insight into what tends to work for creating general intelligences.
If the prototype AGIs are to be built by people concerned with friendliness, it seems like they could be even more useful: testing the feasibility of techniques that seem promising for inclusion in an FAI’s source code, for instance, or checking for flaws in some safety proposal, or doing some kind of theorem-proving work.