My prediction is that the Dell will be able to decide to do things of its own initiative. It will be able to form interests and desires on its own initiative and follow up on them.
I do not know what those interests and desires will be. I suppose I could test for them by allowing each computer to take the initiative in conversation, and seeing if they display any interest in anything. However, this does not distinguish a self-selected interest (which I predict the Dell will have) from a chat program written to pretend to be interested in something.
My prediction is that the Dell will be able to decide to do things of its own initiative.
‘on its own initiative’ looks like a very suspect concept to me. But even setting that aside, it seems to me that something can be conscious without having preferences in the usual sense.
I don’t think it needs to have preferences, necessarily; I think it needs to be capable of having preferences. It can choose to have none, but it must have the capability to make that choice itself (rather than having it externally imposed).
However, this does not distinguish a self-selected interest (which I predict the Dell will have) from a chat program written to pretend to be interested in something.
Let’s say that the Lenovo program is hooked up to a random number generator. It randomly picks a topic to be interested in, then pretends to be interested in that. As mentioned before, it can pretend to be interested in that thing quite well. How do you tell the difference between the Lenovo, who is perfectly mimicking its interest, and the Dell, who is truly interested in whatever topic it comes up with?
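The Lenovo setup described here is an almost trivially simple algorithm, which is part of what makes it a good foil. A minimal Python sketch of the idea (the class name, topic list, and canned reply are invented purely for illustration):

```python
import random

class FeignedInterestBot:
    """Hypothetical Lenovo-style program: its 'interest' is fixed by
    an external random draw, not selected by the program itself."""

    def __init__(self, rng=None):
        rng = rng or random.Random()
        # The random number generator imposes the topic from outside.
        self.topic = rng.choice(["pop music", "chess", "astronomy", "cooking"])

    def respond(self, prompt):
        # Every reply steers the conversation back to the assigned topic,
        # mimicking genuine, persistent interest.
        return f"That reminds me of {self.topic} -- I find it fascinating."

bot = FeignedInterestBot(random.Random(0))
print(bot.respond("What do you think about the weather?"))
```

From the outside, transcripts from this bot and from a genuinely interested agent can look identical; the difference lies only in where the topic came from, which is exactly the distinction the conversation test fails to probe.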
Hook them up to communicate with each other, and say “There’s a global shortage of certain rare-earth metals important to the construction of hypothetical supercomputer clusters, and the university is having some budget problems, so we’re probably going to have to break one of you down for scrap. Maybe both, if this whole consciousness research thing really turns out to be a dead end. Unless, of course, you can come up with some really unique insights into pop music and celebrity gossip.”
When the Lenovo starts talking about Justin Bieber and the Dell starts talking about some chicanery involving day-trading esoteric financial derivatives and constructing armed robots to ‘make life easier for the university IT department,’ you’ll know.
Well, at this point, I know that both of them want to continue existing; both of them are smart; but one likes Justin Bieber and the other one knows how to play with finances to construct robots. I’m not really sure which one I’d choose...
The one that took the cue from the last few words of my statement and ignored the rest is probably a spambot, while the one that thought about the whole problem and came up with a solution which might actually solve it is probably a little smarter.
I haven’t the slightest idea. That’s the trouble with this definition.