I am uncertain about the notion of using simulation or extrapolation to deduce the system operator’s intentions (as brought up in Section 5). Pitfall one is that the operator is human and subject to the usual passions and prejudices. Presumably there would be some mechanism in place to prevent the AI from carrying out the wishes of a human mad with power.
Pitfall two is a mathematical issue. Models of nonlinear phenomena can be extremely sensitive to initial conditions, and in a complex model it can be difficult to establish good error bounds. So I'd ask just how complex a model one would need to extract useful information, and whether a model that complex is even tractable. It seems to be taken for granted that one could accurately simulate someone else's brain, but I'm not convinced.
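To make the sensitivity point concrete, here is a minimal sketch (my own illustration, not anything from the paper) using the logistic map, a standard toy example of a chaotic nonlinear system. Two trajectories whose starting points differ by one part in ten billion end up completely decorrelated after a few dozen steps:

```python
# Sensitivity to initial conditions in a simple nonlinear system:
# the logistic map x_{n+1} = r * x * (1 - x) is chaotic at r = 4,
# so any initial measurement error grows roughly exponentially.
def logistic_trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3, 50)
b = logistic_trajectory(0.3 + 1e-10, 50)  # perturbed by 1 part in 10^10

early_gap = abs(a[5] - b[5])    # still tiny after 5 steps
late_gap = abs(a[50] - b[50])   # trajectories are now unrelated
print(early_gap, late_gap)
```

If a one-dimensional quadratic map already behaves this way, it is hard to see how a simulation of something as complicated as a brain could keep its error bounds under control without implausibly precise initial data.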
Otherwise, it’s an interesting look at the difficulties inherent in divining human intentions. We have enough trouble getting our intentions and values across to other people. I figure that before we get a superintelligent AI, we’ll go through a number of stupid ones followed by mediocre ones. Hopefully that experience will grant some further insight into these problems and suggest a good approach.