I’m hesitant to call those models of any kind; they don’t include any kind of abstraction, either of the program’s internal state or of inferred enemy state. It’s just running the same algorithm on different initial conditions; granted, this is muddled a little because classical chess AI doesn’t have much internal state to speak of, just the state of the board and a tree of possible moves from there. Two copies of the same chess algorithm running against each other might be said to have a (uniquely perfect) model of their enemies, but that’s more or less accidental.
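To make that concrete, here’s a minimal sketch of the classical approach, plain fixed-depth negamax, with the helpers (evaluate, legal_moves, apply_move) left as hypothetical parameters rather than any real engine’s API. The engine’s entire “state” is the position it’s handed plus the recursion itself, and the “opponent model” is just the same function called from the other side:

    def negamax(position, depth, evaluate, legal_moves, apply_move):
        """Best score for the side to move, searching `depth` plies ahead."""
        if depth == 0:
            return evaluate(position)
        best = float("-inf")
        for move in legal_moves(position):
            # The "opponent model" is this same function run on the resulting
            # position: the same algorithm on different initial conditions.
            score = -negamax(apply_move(position, move), depth - 1,
                             evaluate, legal_moves, apply_move)
            best = max(best, score)
        return best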
I’d have to disagree about humans not doing other-modeling, though. As best I can tell we evaluate our actions relative to others primarily based on how we believe those actions affect their disposition toward us, and then infer people’s actions and their effects on us from there. Few people take it much farther than that, but two or sometimes three levels of recursion is more than enough for this sort of modeling to be meaningful.
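For illustration, bounded recursion of this kind is easy to write down (borrowing the “level-k” idea from behavioral game theory; the setup below is a toy assumption, not a model of any real interaction). A level-0 agent models nobody, and a level-k agent best-responds to a level-(k-1) model of the other person:

    def predict_action(level, actions, my_payoff, their_payoff):
        """Action of an agent that models the other party `level` layers deep."""
        if level == 0:
            return actions[0]  # level 0: some fixed default, no modeling at all
        # Assume the other person reasons one layer shallower than we do,
        # with the payoff functions swapped to their point of view.
        their_action = predict_action(level - 1, actions, their_payoff, my_payoff)
        return max(actions, key=lambda a: my_payoff(a, their_action))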
Actually, they don’t have perfect models: the modeled opponent looks fewer moves ahead than the real one does.
As for what people are doing: I mean, we don’t play chess like this. Yes, we model other people’s state, but quite badly. People who overthink it fail horribly at social interaction.
With chess, you could blank out ranks 1 to 3 and 6 to 8 for the first 10 moves, or the like; then you’d have some private state for the AIs to model. Edit: or implement fog of war, where pieces only see the squares they attack. That doesn’t make any fundamental difference here, except that now there’s private state: things the enemy knows, things the enemy doesn’t know, the assumptions the enemy makes about where your pieces are, and so on. (The private things are still on the board, but then, our private thoughts are inside our non-transparent skulls, on the board of the universe.)
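To make the fog-of-war variant concrete, here’s a rough cut using the python-chess library (the visibility rule is the one above, pieces see only what they attack; the function itself is just a sketch):

    import chess

    def visible_squares(board, color):
        """Squares `color` can see: its own pieces plus everything they attack."""
        seen = chess.SquareSet()
        for square in chess.SQUARES:
            piece = board.piece_at(square)
            if piece is not None and piece.color == color:
                seen.add(square)
                seen |= board.attacks(square)  # squares this piece attacks
        return seen

    # e.g. visible_squares(chess.Board(), chess.WHITE)

Everything outside that set is private state: the enemy can’t look it up, and has to maintain assumptions, or a probability distribution, over where your pieces might be.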
The issue, as I said earlier, is that we have an internal definition of what self-awareness is: something that all humans have, that smart animals maybe have, and that simple AIs can’t have. Then we try to make an external definition that behaves like that without mentioning humans, except the world is not so convenient, and whatever definition you come up with, there’s some simple AI that satisfies it.
Yeah, that’s an acceptable way to give a chess AI internal state (or you could just use some parameters for its style of play, like I was discussing a few posts up). I’d call a chess AI that tracked its own state and made inferences about its opponent’s knowledge of it self-aware (albeit with a very simple self in a very simple set of rules), but I suspect you’d find this quite difficult to handle well in practice. Fog of war is almost universally ignored by AI in strategy games that implement it, for example.
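To sketch the style-parameter version: the hidden state is just a few numbers the evaluation depends on, and the self-aware part would be additionally tracking what your own moves have revealed about those numbers. Everything below is hypothetical, a shape rather than a design:

    class StyledEngine:
        """An engine whose evaluation depends on hidden style parameters."""

        def __init__(self, aggression=1.0, mobility=0.5):
            self.aggression = aggression   # weight on material balance
            self.mobility = mobility       # weight on number of legal moves
            # Our running estimate of what our visible play has revealed,
            # i.e. the opponent's probable model of our hidden parameters.
            self.revealed = {"aggression": None, "mobility": None}

        def evaluate(self, material_balance, move_count):
            # Identical positions score differently under different styles,
            # so the parameters are genuine private state, not board state.
            return (self.aggression * material_balance
                    + self.mobility * move_count)

        def note_move_played(self, was_capture):
            # Toy inference about their inference: captures gradually give
            # away our aggression parameter to a watching opponent.
            if was_capture:
                self.revealed["aggression"] = self.aggression

Even this toy version shows why it’s hard in practice: the revealed-state bookkeeping is inference about someone else’s inference, and it only gets worse as the rules get richer.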
Self-awareness isn’t magical, and it probably isn’t enough to solve the problem of consciousness, but I don’t think it’s as basic a concept as you’re implying either.