Designing technology is a special case of prediction
It’s possible to be very good at prediction but still rather bad at design. Suppose you have a black box that does physics simulations with perfect accuracy. Then you can predict exactly what will happen if you build any given thing, by asking the black box. But it won’t, of itself, give you ideas about what things to ask it about, or understanding of why it produces the results it does beyond “that’s how the physics works out”.
(To be good at design you do, I think, need to be pretty good at prediction.)
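The black-box point can be made concrete with a toy sketch. Everything here is hypothetical and illustrative: `physics_oracle` stands in for the perfect predictor, and the point is that the oracle only *evaluates* designs someone else proposes; the quality of the result depends entirely on the candidate generator, which the oracle does not supply.

```python
def physics_oracle(design):
    """Stand-in for the black box: perfectly predicts how a design performs.
    (A toy scoring function; the name and logic are purely illustrative.)"""
    return -(design - 42) ** 2  # performance peaks at design 42


def design_by_search(candidates):
    """Design reduces to search only if someone supplies the candidates.
    The oracle evaluates; it never proposes."""
    return max(candidates, key=physics_oracle)


# With a good candidate generator, perfect prediction suffices:
print(design_by_search(range(100)))  # -> 42
# With a poor generator, perfect prediction still yields a poor design:
print(design_by_search([1, 2, 3]))   # -> 3
```

The asymmetry is the whole point: both calls use the same perfect predictor, yet only the first finds the good design, because the difference lies in what was asked, not in the accuracy of the answers.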
I remember, just a couple of weeks ago, a paper from an AI conference. Researchers asked an AI to design a circuit, and the resulting design included a few components not connected to the rest of the circuit at all, yet they still contributed to the circuit's functioning.
Those who speak of FAI generally mean by it that we should be able to predict various things about what an AI will do: e.g., that it will not bring about a future in which all that we hold dear is destroyed.
Clearly that’s difficult. It may be impossible. But your objection seems to apply equally to designing a chess-playing program with the intention that it will play much better chess than its creator, which has been done successfully many times.
I think you’re thinking of this. Would you like to be more explicit about its application here?
Then this whole endeavour is doomed, because part of the point of designing AGI is that we don’t know what it’ll do.