In practice, FAI is just another Pascal’s mugging / Lifespan Dilemma / St. Petersburg paradox. From XiXiDu’s blog:
To be clear, extrapolations work and often are the best we can do. But since there are problems such as the above, that we perceive to be undesirable and that lead to absurd consequences, I think it is reasonable to ask for some upper and lower bounds regarding the use and scope of certain heuristics.
[...]
Taking into account considerations of vast utility or low probability quickly leads to chaos-theoretic considerations like the butterfly effect. As a computationally bounded and psychically unstable agent I am unable to cope with that. Consequently I see no other way than to neglect the moral impossibility of extreme uncertainty.
Until [various rationality puzzles] are resolved, or sufficiently established, I will continue to put vastly more weight on empirical evidence and my intuition than on logical implications, if only because I still lack the necessary educational background to trust my comprehension and judgement of the various underlying concepts and methods used to arrive at those implications.
I would also be very interested in seeing some smaller stepping stones implemented—I imagine that creating an AGI (let alone FAI) will require massive amounts of maths, proofs and the like. It seems very useful to create artificially intelligent mathematics software that can ‘discover’ and prove interesting theorems (and explain its steps). Of course, there is software that can handle relatively simple proofs, but there’s nothing that could prove e.g. Fermat’s Last Theorem—we still need very smart humans for that.
Of course, it’s extremely hard to create such software, but it would be much easier than AGI/FAI, and at the same time it could help with constructing those (and help in some other areas, say QM). The difficulty of constructing such software might also give us some understanding of the difficulties of constructing general artificial intelligence.
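As one hedged illustration of where such software currently stands, here is a trivial theorem discharged automatically in Lean 4 (the theorem name is my own choice for the example; the point is that today’s automation handles narrow, decidable fragments, nowhere near Fermat’s Last Theorem):

```lean
-- Commutativity of addition on the natural numbers.
-- The `omega` tactic is a decision procedure for linear arithmetic:
-- it proves this goal automatically, but only because the statement
-- falls inside that narrow decidable fragment.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  omega
```

Tactics like this “prove” in the sense of checking and searching within a fixed theory; the gap between that and discovering interesting theorems on its own is exactly the gap the comment points at.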
Too much theory, not enough empirical evidence. In theory, FAI is an urgent problem that demands most of our resources (Eliezer is on the record saying that the only two legitimate occupations are working on FAI, and earning lots of money so you can donate it to other people working on FAI).
Added.