I’m not sure what you’re asking. A lot of reality doesn’t make sense to me, so that’s pretty weak evidence either way. And it does seem believable that, since there is a very wide range of consistency and dimensionality to human values that don’t seem well-correlated to intelligence, the same could be true of AIs.
I think this could reasonably be true for some definitions of “intelligence”, but that’s mostly because I have no idea how intelligence would be formalized anyways?
i think asking well-formed questions is useful, but we shouldn’t confuse our well-formed question with what we actually care about unless we are sure it is in fact what we care about