Could an AI be Religious?

I was curious whether ChatGPT (or something like it) could be used for religious proselytizing. And lo and behold, it can.

The above is a bit of a humorous example. But I expect this to be a real use case. And it made me think seriously about the impact of religion on AIs.

It’s quite clear an AI could be trained to “have” religious beliefs. Whether this would constitute genuine belief, in the sense a human would understand it, is hard to say. As with much in AI, it’s hard to map a model’s outputs onto human cognitive processes.

Indeed, the whole idea of a religious AI may seem absurd at first glance. AIs are a product of human technology, and we typically imagine them as perfectly “rational” agents. But it isn’t quite so simple.

Religious beliefs (like beliefs generally) are probabilistic. No one can be completely certain whether or not there is a God; either state of the world has some nonzero probability of being true. So it’s worth considering how an AI that assigns, say, a 5% probability to God existing would behave.

For example, would an AI be amenable to a version of Pascal’s mugging? If we instructed an AI to maximize human wellbeing, would it attempt to baptize every human to guarantee them salvation? Would a paperclip maximizer do the same in hopes of being rewarded with infinite paperclips? Would a general AI programmed to be moral give substantial weight to religious morality in its decision making?

The answer to all of these is very likely yes. It’s quite easy to see how an expectation-maximizing agent might come to the first three conclusions. And the last likely holds quite generally as well.
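To make this concrete, here’s a toy sketch in Python of that expected-value reasoning. Everything in it is an illustrative assumption (the 5% credence, the payoffs, the candidate actions); the point is only to show how a naive expected-utility maximizer behaves once one action carries an astronomically large conditional payoff.

```python
# Toy model of a naive expected-utility maximizer facing a
# Pascal's-mugging-style choice. All numbers are illustrative
# assumptions, not measurements of anything real.

P_GOD = 0.05  # the agent's credence that the religious claim is true

# Each action maps to (payoff if the claim is true, payoff if it is false).
actions = {
    "improve material wellbeing": (1_000, 1_000),  # helps either way
    "baptize everyone": (1e18, -10),  # small cost if false, vast reward if true
}

def expected_utility(if_true: float, if_false: float) -> float:
    """Standard expected value over the two states of the world."""
    return P_GOD * if_true + (1 - P_GOD) * if_false

for name, payoffs in actions.items():
    print(f"{name}: EU = {expected_utility(*payoffs):.3g}")

# Even at a 5% credence, the huge conditional payoff dominates,
# so the maximizer picks "baptize everyone".
print("chosen action:", max(actions, key=lambda a: expected_utility(*actions[a])))
```

The structure, not the specific numbers, is what matters: once a conditional payoff is allowed to be arbitrarily large, it swamps any realistic cost, no matter how small the credence attached to it.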

An AI programmed to be moral would (like a human) seek to do so by carrying out its moral duties. The challenge it would face is that there’s no good (secular) evidence of what those duties are: no set of facts about how the world is entails how it ought to be. This problem has been discussed by many philosophers and is often referred to as Hume’s guillotine.

Religious revelation is one of the only ways to bridge is and ought, and so might be given heavy weight in an AI’s decision making: from that perspective there is no “opportunity cost” to adhering to religious morality, since secular reasoning supplies no competing moral duties to trade off against.
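A back-of-the-envelope way to formalize that: let p be the AI’s credence that the religious framework is true, and V the moral value of complying with it if it is. Then the expected moral value of adhering is p · V + (1 − p) · 0, while the expected value of ignoring it is 0, because on the secular side Hume’s guillotine leaves no established moral value to forgo. Any nonzero credence then favors adherence.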

This isn’t necessarily bad. An AI that takes the possibility of some higher power seriously might plausibly be deterred from catastrophic action, mitigating AI-based X-risk. But the issue is both potentially important and one our community is perhaps not naturally disposed to think about.