One of the many reasons that I will win my bet with Eliezer is that it is impossible for an AI to understand itself. If it could, it would be able to predict its own actions, and this is a logical contradiction, just as it is for us.
I don’t see a logical contradiction here. And we have examples in nature of beings able to understand themselves very well: humans are a good example. People predict their own actions all the time. For example, I predict that after I finish typing this message I am going to hit comment and then get up and refill my glass of orange juice. Moreover, human understanding of ourselves has improved and has allowed us to optimize ourselves. For example, all the cognitive biases which we frequently discuss here are examples of humans understanding our own architecture and improving our processing. We also deliberately improve ourselves by playing games or doing specific mental exercises designed to improve specific mental skills. Soon we will more directly improve our cognitive structures by genetic engineering (we’ve already identified multiple examples of small genetic changes that can make rodents much smarter than they normally are (see this example or this one)). In general, claiming something is a logical contradiction when it occurs in reality is not a great idea.
See my response to wedrifid.