I think that the link from micro to macro is too weak for this to be a useful line of inquiry. “intelligence” applies on a level of abstraction that is difficult (perhaps impossible for human-level understanding) to predict/define in terms of neural configuration, let alone Turing-machine or quantum descriptions.
I’m not sure what you’re asking. A lot of reality doesn’t make sense to me, so that’s pretty weak evidence either way. And it does seem believable that, since human values show a very wide range of consistency and dimensionality that doesn’t seem well-correlated with intelligence, the same could be true of AIs.
I think this could reasonably be true for some definitions of “intelligence”, but that’s mostly because I have no idea how intelligence would be formalized anyway.
I think asking well-formed questions is useful, but we shouldn’t confuse our well-formed question with what we actually care about unless we are sure it is in fact what we care about.