Just a brief mention since we're supposed to avoid AI for a while, but it is too relevant to this post to totally ignore: I just finished J. Storrs Hall's "Beyond AI". The overlap and differences with Eliezer's FAI are very interesting, and it is a very readable book.
EDIT: You all might notice I did write "overlap and differences"; I noticed the differences, and I do think they are interesting, not least because they seem similar to some of Robin's criticisms of Eliezer's FAI.
I’ve read it too, but made the mistake of reading it right after Gödel, Escher, Bach. Hard to compare.
What surprised me most was how much of what was written in a book published in 2007 was more or less the same as in a book published in 1979. I expected more promising new developments since then, and that was a bit of a downer.
I think I see a lot more difference between my own work and others’ work than some of my readers may.
I think that’s inevitable, if for no other reason than that someone reading two treatments of one subject that they don’t completely understand is likely to interpret them in a correlated way. They may make similar assumptions in both cases, or they may understand the one they read first and try to interpret the one they read second in a similar way.
Hall gives a passable history of AI and relays a lot of standard AI ideas, including Dennett's compatibilist account of free will and some criticisms of nonreductionist accounts of consciousness. He also relays a stew of social science ideas, e.g. social capital and transparent motivations, although the applicability of the latter is often questionable. Those sections aren't bad.
It’s only when he gets to considering the dynamics of powerful intelligences and offers up original ideas that he makes glaring errors. Since that’s your specialty, those mistakes stand out as egregious, while casual readers might miss them or think them outweighed by the other sections of the book.
I see differences between you and Drescher, or you and Greene, both in substance (e.g. some clear errors in Drescher’s book when he discusses the ethical value of rock-minds, neglecting the possibility that the happy experiences of others could figure in our utility functions directly, rather than only through game-theoretic interactions with powerful agents) and in presentation, formalization, and frameworks.
We could try to quantify percentage overlap in views on specific questions.
This is a good example of why I don’t bother to cite what others perceive as “related work”, frankly.