Well, so what if they can predict the future better? That’s certainly one possible advantage of AI, but it’s far from the only one. My greatest fear/hope for AI is that it will be able to design technology much better than humans can.
The way I think of it, designing technology is a special case of prediction. E.g. to design a steam engine, you need to be able to predict how steam behaves in different conditions and whether, given some candidate design, the pressure from the steam will be transformed into useful work or not.
designing technology is a special case of prediction
It’s possible to be very good at prediction but still rather bad at design. Suppose you have a black box that does physics simulations with perfect accuracy. Then you can predict exactly what will happen if you build any given thing, by asking the black box. But it won’t, of itself, give you ideas about what things to ask it about, or understanding of why it produces the results it does beyond “that’s how the physics works out”.
(To be good at design you do, I think, need to be pretty good at prediction.)
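To make the distinction concrete, here’s a toy sketch (all names and numbers are hypothetical, just for illustration): the “black box” is a perfect predictor that scores any candidate design, but the designing itself is a separate search loop that the predictor does nothing to drive.

```python
import random

# Toy stand-in for the "black box": a perfect predictor that, given a
# candidate design (here just two parameters), tells you exactly how
# well it performs. The quadratic is arbitrary; its peak is at (3, -1).
def predict_performance(design):
    x, y = design
    return -(x - 3.0) ** 2 - (y + 1.0) ** 2

# Design is a *search* over candidates. The predictor only scores each
# guess -- it never proposes designs or explains why one scores well.
def design_by_trial_and_error(steps=10_000, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(steps):
        candidate = (rng.uniform(-10, 10), rng.uniform(-10, 10))
        score = predict_performance(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

best = design_by_trial_and_error()
print(best)  # something near (3, -1)
```

All the design work here lives in the loop that generates candidates; a better predictor makes each evaluation more trustworthy, but it doesn’t make the search any smarter.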
I remember, just a couple of weeks ago, a paper from an AI conference. Researchers asked an AI to design a circuit board, and it included a few components not connected to the circuit at all, yet they still assisted in the circuit’s functioning.
Those who speak of FAI generally understand by it that we should be able to predict various things about what an AI will do: e.g., it will not bring about a future in which all that we hold dear is destroyed.
Clearly that’s difficult. It may be impossible. But your objection seems to apply equally to designing a chess-playing program with the intention that it will play much better chess than its creator, which is a thing that has been done successfully many times.
I think you’re thinking of this. Would you like to be more explicit about its application here?
Then this whole endeavour is doomed, because part of the point of designing AGI is that we don’t know what it’ll do.
You can design things based on a priori prediction, but in many cases you don’t have to: you can use trial and error instead.