I don’t have a direct answer to your question, so for now let’s say that AI will not in fact “want” anything if not explicitly asked. This seems plausible to me, but also totally irrelevant from a practical perspective: who’s going to build an entire freakin’ superintelligence and then just never have it do anything!? For a program even to communicate words or data to us, it’s going to need some sort of drive to do so; otherwise it would remain silent and we’d effectively have a very expensive silicon brick on our hands. So while in theory it may be possible to build an AI without wants, in practice there is always something an AI will be “trying to do”.
I didn’t mean that it wouldn’t do anything, ever. Just that it will do what is asked, which creates its own set of issues, of course, if it kills us in the process. But I can imagine that it still will not have anything that could be described, from the intentional stance, as wants or desires.