Link: why training a.i. isn’t like training your pets

As the SIAI gains publicity, more people are reviewing its work. I am not sure how popular this blog is, but judging by its about page, the author writes for some high-profile blogs. His latest post takes on Omohundro’s “Basic AI Drives”:

When we last looked at a paper from the Singularity Institute, it was an interesting work by Dr. Shane Legg asking whether we actually know what we’re really measuring when trying to evaluate intelligence. While I found a few points that seemed a little odd to me, the broader point Dr. Legg was pursuing was very much valid and there were some equations to consider. However, this paper isn’t exactly representative of most of the things you’ll find coming from the Institute’s fellows. Generally, what you’ll see are sprawling philosophical treatises filled with metaphors, trying to make sense of a technology that either doesn’t really exist and is treated as a black box with inputs and outputs, or is imagined by the author as a combination of whatever a popular science site reported about new research ideas in computer science. The end result of this process tends to be a lot like this warning about the need to develop a friendly or benevolent artificial intelligence system, based on a rather fast and loose set of concepts about what an AI might decide to do and what will drive its decisions.

Link: worldofweirdthings.com/2011/01/12/why-training-a-i-isnt-like-training-your-pets/

I posted a few comments, but I don’t think I’m the right person to continue that discussion. So if you believe that what other people think about the SIAI matters and want to improve its public relations, here is your chance. I myself am interested in the answers to his objections.