Agreed. I get basically the same feeling, on top of which it seems to me that formalizing a fuzzily defined goal system, be it FAI or a paperclip maximizer, may well be impossible in practice (nobody has managed it even in a toy model given infinite computing power!). That leaves us with either neat AIs that implement something like 'maximize own future opportunities' (the AI has to be able to identify separate courses of action to begin with), or altogether messy AIs (neural networks, cortical column networks, et cetera) to which none of the argument applies. If I put my speculative hat on, I can just as well make up an argument that the AI will be a Greenpeace activist, by considering what the simplest self-protective goal systems might look like (and discarding the bias that the AI is self-aware in a human-like way).