I agree with Holden, and additionally it looks like AGI discussions have most of the properties of mindkilling.
These discussions are about policy, and policy affecting the medium-to-far future at that. Such policies cannot be founded on reliable scientific evidence. Bayesian inquiry depends heavily on priors, and there is nowhere near enough data to tip the scales.
As someone who practices programming and has studied CS, I find Hanson, the AI researchers, and Holden more convincing than Eliezer_Yudkowsky or lukeprog. But this is more prior-based than evidence-based. Nearly all the arguments on either side do is systematize the priors you already hold. I cannot judge which side offers more odds-changing data, because the arguments from one side make far more sense to me and I cannot factor out my original prior dissonance with the other side.
The arguments about “optimization done better” tell us nothing about where the fundamental limits of each kind of optimization lie; with a fixed type of computronium, it is not clear that a head start of, say, a week would ensure that a single AI instance beats a later instance running on 10x the computronium (and partitioning the world’s computing power for a month requires just a few ships with conveniently dropped anchors; we have seen that before, on a somewhat smaller scale). The limits may lie further out, but it is hard to be sure.
It may be that I fail to accept some parts of the arguments because my priors are tipped too strongly. But Holden, who read most of the sequences without a strong prior opinion, wasn’t convinced either. This seems to support the theory that there are few genuinely mind-changing arguments.
Unfortunately, the Transhumanist Wiki has been returning an error for a long time, so I cannot link to the relatively recent “So You Want To Be A Seed AI Programmer” by Eliezer_Yudkowsky. What I remember best from it, and what made me readier to discount SIAI-side priors, was its arguing with a fixed bottom line. I guess the Web Archive version ( http://web.archive.org/web/20101227203946/http://www.acceleratingfuture.com/wiki/So_You_Want_To_Be_A_Seed_AI_Programmer ) should be quite OK—or is it missing important edits? It is actually a lot of content that puts the Singularity arguments in a somewhat different light; perhaps it should either be publicly declared obsolete or be preserved at http://wiki.lesswrong.com/ for everyone who wants to read it.
I repeat once more that I consider most of this discussion to be driven by different priors and unshareable personal experiences. My agreeing with Holden can give you only the information that a person like me can (not necessarily will) hold such priors. If you agree with me, you cannot use me to check your reasons; if you disagree with me, I cannot convince you and you cannot convince me, at least not at our current state of knowledge.
relatively recent “So you want to be a Seed AI Programmer” by Eliezer_Yudkowsky [...] maybe it should be either declared obsolete in public
(I believe that document was originally written circa 2002 or 2003; the archived copy is a mirror of the Transhumanist Wiki version, which (with comments as recent as 2009) was itself a mirror. “Obsolete” seems accurate.)