I don’t at all think that this robot possesses full-blown human-style self-awareness, but, depending on the actual algorithms used in self-recognition, I think it passes the mirror test in a meaningful way.
For instance, if it learned to recognize itself in the mirror by moving around and noticing strong correlations between its model of its own movements and the image it sees, then concluded that the image is a reflection of itself, and then identified visual changes to itself, I would say that it has a self-model in a meaningful and important way. It doesn’t contextualize itself in a social setting, or model itself as having emotions, but it is self-representing.
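The correlation-based recognition I'm describing could be sketched as a toy model, something like the following (this is purely illustrative and hypothetical; every name here is mine, not anything from the robot's actual system):

```python
import random

def correlation(xs, ys):
    # Pearson correlation between two equal-length sequences.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def looks_like_self(motor_commands, observed_motion, threshold=0.9):
    # If the motion seen in the mirror tracks the robot's own motor
    # commands closely enough, conclude the image is a reflection of self.
    return correlation(motor_commands, observed_motion) > threshold

random.seed(0)
commands = [random.uniform(-1, 1) for _ in range(100)]
mirror = [c + random.gauss(0, 0.05) for c in commands]  # reflection tracks commands
other = [random.uniform(-1, 1) for _ in range(100)]     # unrelated agent's motion

print(looks_like_self(commands, mirror))  # expect True
print(looks_like_self(commands, other))   # expect False
```

The point of the toy model is just that "this image moves exactly when and how I move" is strong evidence for "this image is me", and a system that draws that inference is doing something beyond passive tracking.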
This robot has less self-modeling capability, I would say: http://www.youtube.com/watch?v=ehno85yI-sA
It’s able to update a model of itself, but not recognize itself.
Towards the end of the video, I feel a similar or greater amount of sympathy for the robot’s “struggle” as I do for injured crustaceans. I also endorse that level of sympathy, and can’t really think of a meaningful functional difference between the two that makes the crustacean’s intelligence more important.
If it just took for granted that it was looking at itself, and updated a model of itself, it would have some kind of self-model, but that seems less important.
That’s a really good point. Your comment and thomblake’s comment do a pretty good job of dismantling my remark.