As someone who has published lots of writing that no longer reflects my views, I can certainly understand Eliezer’s insistence that it is obsolete. And indeed it is. On the other hand, I know a few SI people who think there are important points in CFAI not made anywhere else, and prefer its presentation of a few points to those in CEV. I won’t name names, but people are welcome to identify themselves.
I really appreciate CFAI and think everyone should read it, because it’s an example of a brilliant novice tackling the Friendly AI problem from scratch. It makes specific suggestions in abundance, something that authors of much of the machine ethics literature could only dream of. To truly understand the motivations for CEV, it is necessary to understand what came before it: the CFAI proposals.
Eliezer has condemned, deleted, and/or simply refused to release a great deal of his excellent work, for instance short intros on the SIAI website, Algernon’s Law, and much else. This is brand control. Even the Singularitarian Principles have a big “obsolete” warning at the top, ostensibly for just a few sentences expressing support for “the Singularity” rather than a “friendly Singularity”.
Creating Friendly AI is what really inspired me to get involved in the pursuit of Friendly AI. I don’t know if the Sequences or CEV would have had as powerful an effect.
Eliezer says no, but Anissimov disagrees. Starglider has a detailed criticism.
The information I was looking for, dead on! Now I wonder about http://singinst.org/ourresearch/publications/GISAI/printable-GISAI.html