Ignoring the present problems with CEV, which are still too deep and insufficiently understood to permit any final judgment on that project, the relevant point is that CEV is meant to solve the problem of the existential threat posed by non-friendly AI, not to improve the present human condition. In other words, it’s an attempt to figure out how to ensure that an AI, if implemented, won’t turn us into dog food — not a pseudoscientific recipe for building utopia here and now (which would be just as insane as all such previous ideas).
Assuming an AI will be implemented at some point, CEV would be a preferable alternative to being turned into dog food, and — as a wild speculation — in the hands of a superintelligence, its results might perhaps not even be that bad by other standards. But all this is extremely far-fetched in any case.
what’s your problem with utopia? don’t you like nice things?