Eliezer, after you realized that attempting to build a Friendly AI is harder and more dangerous than you thought, how far did you backtrack in your decision tree? Specifically, did it cause you to re-evaluate general Singularity strategies to see whether AI is still the best route? You wrote the following on Dec 9, 2002, but it's hard to tell whether that was before or after your "late 2002" realization.
I for one would like to see research organizations pursuing human intelligence enhancement, and would be happy to offer all the ideas I thought up for human enhancement when I was searching through general Singularity strategies before specializing in AI, if anyone were willing to cough up, oh, at least a hundred million dollars per year to get started, and if there were some way to resolve all the legal problems with the FDA.
Hence the Singularity Institute “for Artificial Intelligence”. Humanity is simply not paying enough attention to support human enhancement projects at this time, and Moore’s Law goes on ticking.
Aha, a light bulb just went on in my head. Eliezer did re-evaluate, and this blog is his human enhancement project!
Why doesn’t Zaire just divide himself in half, let each half get 1/4 of the pie, then merge back together and be in possession of half of the pie?
Or, Zaire might say: Hey guys, my wife just called and told me that she made a blueberry pie this morning and put it in this forest for me to find. There’s a label on the bottom of the plate if you don’t believe me. Do you still think ‘fair’ = ‘equal division’?
Or maybe Zaire came with his dog, and claims that the dog deserves an equal share.
I appreciate the distinction Eliezer is trying to draw between the object level and the meta level. But why assume that the object-level procedure will be simple?