This is really just a robot built to pass a specific test. This isn’t that different from a robot programmed to say “I’m aware and am aware of my own awareness.” Don’t confuse a useful proxy test with genuine self-awareness.
I don’t at all think that this robot possesses full-blown human-style self-awareness, but, depending on the actual algorithms used in self-recognition, I think it passes the mirror test in a meaningful way.
For instance, if it learned to recognize itself in the mirror by moving around and noticing strong correlations between its model of its own movements and the image it sees, ultimately concluding that the image is a reflection of itself, and then identifying visual changes about itself, then I would say it has a self-model in a meaningful and important way. It doesn’t contextualize itself in a social setting, or model itself as having emotions, but it is self-representing.
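The recognition procedure described above can be sketched in a few lines. This is purely illustrative and assumes nothing about the robot’s actual algorithm: the names, the Pearson-correlation measure, and the 0.9 threshold are all my own assumptions, chosen to show the shape of the idea (commanded motion that strongly predicts observed motion gets labeled “self”).

```python
# Hypothetical sketch of mirror self-recognition via motion correlation.
# All names and thresholds here are illustrative assumptions, not the
# robot's actual algorithm.

def pearson(xs, ys):
    """Pearson correlation between two equal-length motion traces."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def looks_like_self(commanded, observed, threshold=0.9):
    """Conclude the image is a reflection of 'self' if the observed
    motion tracks the robot's own commanded motion closely enough."""
    return pearson(commanded, observed) >= threshold

# A mirror image moves almost exactly as commanded...
commands   = [0.1, 0.5, -0.3, 0.8, -0.6, 0.2]
reflection = [0.11, 0.52, -0.28, 0.79, -0.61, 0.18]
print(looks_like_self(commands, reflection))  # → True

# ...while another agent's independent movements do not correlate.
stranger = [0.7, -0.2, 0.4, -0.9, 0.3, -0.1]
print(looks_like_self(commands, stranger))    # → False
```

The interesting step, per the comment above, would come after this check: having decided the image is itself, the robot can then read visual changes off the reflection back into its self-model.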
This robot has less self-modeling capability, I would say: http://www.youtube.com/watch?v=ehno85yI-sA
It’s able to update a model of itself, but not recognize itself.
Towards the end of the video, I feel a similar or greater amount of sympathy for the robot’s “struggle” than I do for injured crustaceans. I also endorse that level of sympathy, and can’t really think of a meaningful functional difference between the two that makes the crustacean’s intelligence more important.
If it just took for granted that it was looking at itself and updated a model of itself accordingly, it would have some kind of self-model, but one that seems less important.
That’s a really good point. Your comment and thomblake’s comment do a pretty good job of dismantling my remark.
Based on what I’ve seen before with Nico, I’m guessing it was able to figure out that the reflection was giving it information about itself, and then updated its self-model based on a change in the reflection.
I don’t know what “genuine self-awareness” is, but this is a lot different from a robot programmed to say “I’m aware and am aware of my own awareness.”
So basically, it shows that the robot understands what a mirror is.
Depends on how it did it, doesn’t it? More details are certainly necessary before concluding anything.
If it turns out to actually be self-aware, but isn’t that bright, then the field of AI suddenly gets very interesting and scary.