Self-awareness: why is it discussed as so profound?

Something I find rather odd: why is self-awareness usually discussed as something profoundly mysterious and advanced?

People would generally agree that a dog can be aware of food in the bowl, if the dog has seen or smelled it, or can be unaware of the food bowl otherwise. One would think that a dog can be aware of itself insofar as a dog can be aware of anything else in the world, like food in the bowl. There isn't a great deal of argument about a dog's awareness of food.

Yet the question of whether a dog has 'self-awareness' quickly turns into a debate of opinions, language, and shifting definitions of what 'self-awareness' is, and into irrelevancies such as whether the dog is smart enough to figure out how a mirror works well enough to identify a paint blotch on itself1, or requests that it be shown beyond all doubt that the dog's mind is aware of the dog's own mind, which is something you can deny of other humans just as successfully.

I find it rather puzzling.

My first theory is that this is a case of avoiding the thought because of its consequences for the status quo. The status quo is that we, without giving it much thought, decided that self-awareness is a uniquely human quality, and then carelessly made our morality sound more universal by saying that self-aware entities are entitled to rights. At the same time, we don't care too much about other animals.

At this point, having well-'established' notions in our heads (notions which weren't quite rationally established, but just sort of happened over time), we don't so much try to actually think or argue about self-awareness as try to define self-awareness so that humans are self-aware and dogs aren't, while the definition still sounds general, or try to fight such definitions, depending on our feelings towards dogs.

I think this is a case of a general problem with reasoning. When there's an established status quo, one which has sort of evolved historically, we can have real trouble thinking about it; instead, we make up new definitions which sound as if they had existed from the start and as if the status quo were justified by those definitions.

This gets problematic when we have to think about self-awareness for other purposes, such as AI.

1: I don't see how the mirror self-recognition test implies anything about self-awareness. You pick an animal that grooms itself, and you see whether that animal can groom itself using the mirror. That can work even if the animal only identifies what it wants to groom with what it sees in the mirror, without identifying either with a self (whatever that means). Or it can fail, if the animal doesn't have good enough pattern matching to match those items, even if the animal does identify what it grooms with itself and has a concept of self.

Furthermore, an animal that just wants to groom some object which is constantly nearby, and the grooming of which feels good, could, if capable of language, invent a name for this object, "foobar", and then, when making a dictionary, we'd not think twice about translating "foobar" as 'self'.

edit: Also, I'd say, self-recognition complicates our model of mirrors, in the "why does a mirror swap left and right rather than up and down?" way. If you look at the room in the mirror, the mirror obviously swaps front and back. Clear as day. But if you look at 'self' in the mirror, there's this self standing there facing you, and its left side is swapped with its right side. The corresponding model of the mirror is a rotation of 180 degrees around the vertical axis (not the horizontal axis), followed by a swap of left and right but not up and down. You have a more complicated, more confusing model of the mirror, likely because you recognized the bilaterally symmetric yourself in it.
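The equivalence of the two models can be checked with a few lines of arithmetic. This is a minimal sketch under an assumed coordinate convention (x = left-right, y = up-down, z = front-back, with the mirror lying in the x-y plane); the function names are mine, not from the post:

```python
# Assumed convention: x = left-right, y = up-down, z = front-back,
# mirror lying in the x-y plane.

def mirror(p):
    # What the mirror actually does: swap front and back.
    x, y, z = p
    return (x, y, -z)

def rotate_180_about_vertical(p):
    # Turning a figure around to face you: 180 degrees about the y (vertical) axis.
    x, y, z = p
    return (-x, y, -z)

def swap_left_right(p):
    x, y, z = p
    return (-x, y, z)

# The intuitive "person facing me with left and right swapped" model,
# i.e. a 180-degree turn followed by a left-right swap, is the same
# transformation as the simple front-back flip:
for p in [(1, 2, 3), (-4, 0, 7), (5, -1, 2)]:
    assert swap_left_right(rotate_180_about_vertical(p)) == mirror(p)
```

So both models describe the same mapping; the second just routes it through an extra mental rotation.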