I suspect that the belief in belief is more attractive to mammalian brains than to AI. We have a lizard brain wrapped in an emotional brain wrapped in a somewhat general purpose computer (neocortex). Intellectual humans are emotional beings trying hard to learn effective ways to program the neocortex. We have to do a lot of wacky stuff to make progress, and things we try that make progress in one direction, like a belief in belief, will generally cost us on progress in other directions, like maintaining an openness appropriate to the fact that our maps are astoundingly less complex than the territory being mapped.
But the AI is the rationalists' baby, and we obsess over preserving as much of the baby while tossing as much of the bathwater as possible. Sure, we don't want the baby to make the same mistakes we made, but we want it to be able to exceed us in many ways, so we necessarily leave open doors behind which we know not what lies. I imagine this is what the gods were thinking about when they decided to imbue their AIs with free will.
More than a belief in belief trap for our own constructions, I would worry about some new flaw appearing. If the groups around here have any say, there will probably be a bias toward Bayesian reasoning in an AI which will likely be poorly suited for believing in belief. But the map is not the territory, and Bayesian probability is a mapping technique. Who can even imagine what part of the territory the Bayesian mappers will miss entirely or mismap systematically?
Finally, a Christian AI would hardly be a disaster. There's been quite considerable intellectual progress made by self-described Christians. Sure, if the AI gets into an "I've got to devote all my cycles to converting people" loop, that sort of sucks, but even in bio intelligences that tends to just be a phase, especially in the better minds. The truth about maps and territories seems to be that no matter how badly you map Russia, it is your map of America that determines your ability to exploit that continent. That is, people who have a very different map of the supernatural than us have shown no consistent deficit in developing their understanding of the natural world. Listen to a physicist tell you why he believes that a Grand Unified Theory is so likely, or contemplate the very common belief that the best theories are beautiful. We rationalists believe in belief and have our own religions. We have to in order to get anywhere, because that is how our brains are built.
Maybe our version of AIs will be quite defective, but better than our minds, and it will be a few generations after the singularity that truly effective intelligences are built. Maybe there is no endpoint until there is a single intelligence using all the resources of the universe in its operation, until the mapper is as complex as the universe, or at least as complex as it can be in this universe.
But that sounds like at least some aspects of the Christian god, doesn't it?