In particular, I strongly suspect that acausal norms are not so compelling that AI technologies would automatically discover and obey them. So, if your aim in reading this post was to find a comprehensive solution to AI safety, I’m sorry to say I don’t think you will find it here.
To make sure I understand: would this mean that the AI technologies would be acting suboptimally, in the sense that they could achieve their goals better by joining the acausal economy?