Ya, I thought about it. To me the special thing about self-awareness is that I feel my mind internally in a way that I cannot feel any other object; yet I see that as just having some sort of loop inside that turns some of the internal data into qualia, an implementation detail. Without it I’d need to talk out loud to get to “I think therefore I am”, or infer the existence of a self from watching myself move my hands with my own eyes; that would be inconvenient, could impede or delay the realization that a ‘self’ exists, and could reduce reproductive fitness (self-preservation could fail). But it would not exclude experiencing the qualia (it could have made it hard to talk about the qualia).
My understanding is that p-zombies are devoid of qualia, but not of self-knowledge.
I have other loops, even ones most people lack: if I close my eyes I still see the outline of my hands (without colours or any other properties, just the outline); that’s a form of synaesthesia. I can’t turn it off any more than a grapheme-colour synaesthete can turn off his ‘syntax highlighting’.
This suggests the special thing about humans is (a) that they model other humans and (b) that the model includes assuming the other person has an awareness of self; and that animals aren’t modeled as humans, so we don’t start with that assumption.
I dunno, I can make a game AI for an in-game airplane that will model itself, the targets, the target AIs, the target AIs’ self-modelling, the target AIs’ modelling of the attacker AI, and so on. And the AI will still be about as smart as a brain-damaged fruit fly.
[I can’t really prove that fruit flies, or parrots, or crows, or dogs, or apes, or other humans, do this kind of thing, but I have no reason whatsoever to presume that they can’t, if an AI that I write for a computer game can do it with very little computing power.]
The chess AI, for one thing, does just that, in its purest form. When testing a potential move, it makes the move (in memory), then invokes the other side’s model (an adapted self-model) to predict the other side’s move in that situation, which invokes its self-model to predict its own reply, and so on.
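Roughly like this; a minimal sketch of that recursion, with a toy take-away game standing in for chess so it actually runs (the game and the helper names are made up for illustration, not engine code):

```python
def negamax(state, depth, legal_moves, result, score):
    """Score `state` for the side to move. The 'enemy model' is this
    same function, invoked one ply shallower for the other side."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return score(state)
    # Try each move "in memory", then ask the opponent's (identical)
    # model what it would do to us from the resulting position.
    return max(-negamax(result(state, m), depth - 1,
                        legal_moves, result, score)
               for m in moves)

# Toy game: a pile of stones, take 1-3 per turn, taking the last stone wins.
take_moves  = lambda pile: [m for m in (1, 2, 3) if m <= pile]
take_result = lambda pile, m: pile - m
take_score  = lambda pile: -1 if pile == 0 else 0  # empty pile: side to move has lost

print(negamax(4, 8, take_moves, take_result, take_score))  # -1: four stones loses
print(negamax(5, 8, take_moves, take_result, take_score))  #  1: five stones wins (take one)
```

The point is just that the enemy model is literally the self-model, called for the other side.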
With the animals… The cat on a chair next to me is cleaning herself. Cats like to be clean. Etc. etc. I think we start with this assumption as children, and then we start wanting to be special / get taught we’re so special and shouldn’t care about animals / have to kill animals for food, and then we start redefining stuff in very odd ways so that humans would be self-aware and nothing else would. As a result we end up with self-awareness being undefined, because the world is not so conveniently logical and we can’t come up with any definition of self-awareness under which some fairly stupid systems wouldn’t be self-aware. That may well also be why consciousness and the like are so ill-defined.
edit:
Perhaps we want a concise definition that fits a specific purpose: humans are X [and perhaps other very big-brained animals which we don’t need to be killing], but nothing else is X, and we want to express this without appearing to make humans special. But to our dismay, such a definition simply doesn’t exist. We either end up with nonsense that allows for a world where everyone else is a p-zombie (the humans may not have X), or we end up with some good definition which unfortunately allows even very simple systems to be X, and then nobody wants to accept that definition.
The chess AI, for one thing, does just that, in its purest form. When testing a potential move, it makes the move (in memory), then invokes the other side’s model (an adapted self-model) to predict the other side’s move in that situation, which invokes its self-model to predict its own reply, and so on.
This doesn’t seem quite right. A relevant analogy to “self” in a chess AI wouldn’t be the black side, or the current configuration of the board from Black’s perspective, or even the tree of Black’s projected best moves; that’s all state, more analogous to bodies or worlds than selves. A better analogy to “self” would be the inferred characteristics of the algorithm running Black: how aggressive it is, whether it goes for positional play or captures, whether it really likes the Four Knights Game, and so forth.
Some chess AI does track that sort of thing. But it’s not remotely within the scope of classical chess AI as I understand it (bearing in mind that my only serious exposure to chess AI took a decidedly nonclassical approach), and I’ve never heard of a chess AI that tracked it recursively, factoring in inferred changes in the opponent’s behavior if the AI’s own parameters are tweaked. It’d be possible, of course, but very computationally intensive for all but the simplest games, and probably fairly low-fidelity given that computers don’t play much like humans in most of the games I’m aware of.
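To make that concrete, here is one simple shape such tracking could take (a sketch; the parameter and the update rule are invented for illustration): keep a running estimate of, say, the opponent’s aggressiveness, and nudge it toward what each observed move suggests.

```python
def update_aggressiveness(estimate, move_was_capture, rate=0.1):
    """Exponential moving average of how often the opponent captures;
    a stand-in for 'inferred characteristics of the algorithm running Black'."""
    observed = 1.0 if move_was_capture else 0.0
    return estimate + rate * (observed - estimate)

# After a few observed moves the estimate drifts toward the opponent's style:
style = 0.5
for capture in [True, True, False, True]:
    style = update_aggressiveness(style, capture)
print(round(style, 3))  # 0.582
```

Doing this recursively, modelling the opponent’s estimate of *your* parameters, is where the computational cost blows up.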
Why doesn’t the algorithm itself count as the self? The algorithm has a self-model: the self, computing 2 ply less deep. And an enemy model, based on the self computing 1 ply less deep and for the other side. (Black might play more conservatively.)
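Concretely, the ‘adapted self model’ can be the same search handed a different evaluation, e.g. a more conservative one for Black (a sketch in the same style as the toy search above; the asymmetric-evaluation idea is my illustration, not standard engine practice):

```python
def search(state, depth, my_eval, their_eval, legal_moves, result):
    """The enemy model is this same function with the evaluations
    swapped: the self, one ply shallower, playing the other side."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return my_eval(state)
    return max(-search(result(state, m), depth - 1,
                       their_eval, my_eval, legal_moves, result)
               for m in moves)
```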
It is a little bit egocentric, yes. Not entirely so, though; the opponent is playing the other colour.
Also, people don’t do this recursive modelling of the opponent’s mood, due to lack of data. You can’t infer 1000 bits of information from 10 bits.
edit: this should be contrasted with a typical game AI that just makes the bot run around and shoot at people, without any self-model: it knows you’re there, it walks there, it shoots. That is the typical concept of a zombie.
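Something like this (names invented; a sketch of the contrast, not any particular engine’s API):

```python
def zombie_policy(sees_enemy, enemy_pos):
    """Purely reactive: stimulus in, action out. No model of itself,
    and no model of what the enemy knows or intends."""
    if sees_enemy:
        return ("shoot", enemy_pos)
    return ("walk", enemy_pos)  # head for the last known position
```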
I’m hesitant to call those models of any kind; they don’t include any kind of abstraction, either of the program’s internal state or of inferred enemy state. It’s just running the same algorithm on different initial conditions; granted, this is muddled a little because classical chess AI doesn’t have much internal state to speak of, just the state of the board and a tree of possible moves from there. Two copies of the same chess algorithm running against each other might be said to have a (uniquely perfect) model of their enemies, but that’s more or less accidental.
I’d have to disagree about humans not doing other-modeling, though. As best I can tell we evaluate our actions relative to others primarily based on how we believe those actions affect their disposition toward us, and then infer people’s actions and their effects on us from there. Few people take it much farther than that, but two or sometimes three levels of recursion is more than enough for this sort of modeling to be meaningful.
Actually, they don’t have perfect models; the model looks fewer moves ahead.
With regard to what people are doing: I mean, we don’t play chess like this. Yes, we model other people’s state, but quite badly. The people who overthink it fail horribly at social interaction.
With chess, you could hide ranks 1 to 3 and 6 to 8 for the first 10 moves, or the like; then you’ve got some private state for the AIs to model. edit: or implement fog of war, where pieces only see the squares they attack. That doesn’t make any fundamental difference here, except that now there is private state: things the enemy knows, things the enemy doesn’t know, the assumptions the enemy makes about where your pieces are, etc. (The private things are still on the board, but then our private thoughts are inside our non-transparent skulls, on the board of the universe.)
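The private state could be as simple as a per-square belief about the enemy’s hidden pieces (a sketch; this representation is one assumption among many):

```python
def update_belief(belief, visible):
    """belief: {square: probability an enemy piece is there}.
    visible: {square: "enemy" or "empty"} for squares we can see.
    Seen squares become certain; unseen squares keep their prior.
    A fuller version would also diffuse probability along the
    enemy's legal moves each turn."""
    new_belief = dict(belief)
    for square, contents in visible.items():
        new_belief[square] = 1.0 if contents == "enemy" else 0.0
    return new_belief
```

And the enemy keeps a similar belief about your pieces, so your model of the enemy has to run against its belief, not against the true board.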
The issue, as I said earlier, is that we have an internal definition of what self-awareness is (something that all humans have, smart animals maybe have, and simple AIs can’t have), and then we try to make some external definition that works like this without mentioning humans; except the world is not so convenient, and whatever definition you make, there’s a simple AI that satisfies it.
Yeah, that’s an acceptable way to give a chess AI internal state (or you could just use some parameters for its style of play, like I was discussing a few posts up). I’d call a chess AI that tracked its own state and made inferences about its opponent’s knowledge of it self-aware (albeit with a very simple self in a very simple set of rules), but I suspect you’d find this quite difficult to handle well in practice. Fog of war is almost universally ignored by AI in strategy games that implement it, for example.
Self-awareness isn’t magical, and it probably isn’t enough to solve the problem of consciousness, but I don’t think it’s as basic a concept as you’re implying either.
This suggests that the special thing about humans is that they trigger each other’s human-detectors, and all else is rationalisation.
That doesn’t mean we have to care.