I don’t think this robot possesses full-blown human-style self-awareness, but, depending on the actual algorithms used for self-recognition, I think it passes the mirror test in a meaningful way.
For instance, if it learned to recognize itself in the mirror by moving around and noticing very strong correlations between its model of its own movements and the image it sees, then concluded that the image is a reflection of itself, and went on to identify visual changes to itself, then I would say it has a self-model in a meaningful and important sense. It doesn’t contextualize itself in a social setting, or model itself as having emotions, but it is self-representing.
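The kind of algorithm I have in mind can be sketched very simply. This is a toy illustration, not anything the robot in question actually runs; all the names and the correlation threshold are my own assumptions. The idea: an observed object counts as "self" when its motion tracks the robot's own motor commands almost perfectly.

```python
import random

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def looks_like_self(motor_commands, observed_motion, threshold=0.95):
    """Hypothetical classifier: an object whose observed motion is
    near-perfectly correlated with the robot's own commanded motion
    is judged to be a reflection of the robot itself."""
    return correlation(motor_commands, observed_motion) >= threshold

random.seed(0)
commands = [random.uniform(-1, 1) for _ in range(100)]
# A mirror image tracks the commands (plus a little sensor noise);
# an independent agent moving nearby does not.
mirror = [c + random.gauss(0, 0.01) for c in commands]
other = [random.uniform(-1, 1) for _ in range(100)]

print(looks_like_self(commands, mirror))  # True: motion tracks commands
print(looks_like_self(commands, other))   # False: uncorrelated motion
```

A system like this has only the thinnest possible self-model, of course, but it captures the structure of the inference: "that thing moves exactly when and how I move, therefore it is me."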
This robot, I would say, has less self-modeling capability: http://www.youtube.com/watch?v=ehno85yI-sA
It’s able to update a model of itself, but not recognize itself.
Towards the end of the video, I feel a similar or even greater amount of sympathy for the robot’s “struggle” as I do for injured crustaceans. I also endorse that level of sympathy, and can’t think of a meaningful functional difference between the two that makes the crustacean’s intelligence the more important one.
If it simply took for granted that it was looking at itself and updated a model of itself accordingly, it would still have some kind of self-model, but a less significant one.
Direct advice for young (= precollege) people. They have pretty much their whole lives ahead of them, and if you can reach them and give them advice before they start down some potentially limiting major life path, that’s a huge gain. I personally want a bit of help with this...
You could do something yourself, but if you get two people to do that same thing, you effectively dedicate two more lifetimes to the effort. And doing so doesn’t even eat up much of your own time.