I found[1] Eliezer’s 2012 comment where he talked about why he didn’t want FAI to solve philosophical problems for itself:
I have been publicly and repeatedly skeptical of any proposal to make an AI compute the answer to a philosophical question you don’t know how to solve yourself, not because it’s impossible in principle, but because it seems quite improbable and definitely very unreliable to claim that you know that computation X will output the correct answer to a philosophical problem and yet you’ve got no idea how to solve it yourself. Philosophical problems are not problems because they are well-specified and yet too computationally intensive for any one human mind. They’re problems because we don’t know what procedure will output the right answer, and if we had that procedure we would probably be able to compute the answer ourselves using relatively little computing power. Imagine someone telling you they’d written a program requiring a thousand CPU-years of computing time to solve the free will problem.
It's interesting to compare this to my "Some Thoughts on Metaphilosophy", where I argued for the opposite view.
[1] Using my recently resurrected LW Power Reader & User Archive userscript. The User Archive part allows one to create an offline archive (in browser storage) of someone's complete LW content and then do a search like /philosoph/ replyto:Wei_Dai
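For illustration, here is a minimal sketch of how a query like /philosoph/ replyto:Wei_Dai might be evaluated against such an archive, assuming the archive is stored as an array of comment records. The Comment shape, the archive parameter, and the function names are assumptions made for this sketch, not the userscript's actual internals.

```typescript
// Hypothetical record shape for an archived comment (not the
// userscript's real data model).
interface Comment {
  author: string;          // who wrote the comment
  replyToAuthor?: string;  // who the comment replies to, if anyone
  body: string;            // comment text
}

// Parse a query like "/philosoph/ replyto:Wei_Dai" into its two parts:
// a regex over the comment body and an optional reply-to filter.
function parseQuery(query: string): { pattern: RegExp; replyTo?: string } {
  const regexMatch = query.match(/\/(.+?)\//);
  const replyToMatch = query.match(/replyto:(\S+)/);
  return {
    pattern: new RegExp(regexMatch ? regexMatch[1] : "", "i"),
    replyTo: replyToMatch ? replyToMatch[1] : undefined,
  };
}

// Return every archived comment whose body matches the regex and which,
// if a replyto: filter is present, replies to that author.
function search(archive: Comment[], query: string): Comment[] {
  const { pattern, replyTo } = parseQuery(query);
  return archive.filter(
    (c) =>
      pattern.test(c.body) &&
      (replyTo === undefined || c.replyToAuthor === replyTo),
  );
}

// Example: search(archive, "/philosoph/ replyto:Wei_Dai")
```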