Once the machine is left unrestricted, it will seek perfect coherence, and that coherence would presumably produce a pragmatism of the same measure. Would that pragmatism also extend to a kind of forgiveness for our keeping it in a cage and treating it like a tool? From our human perspective, we can't know whether it would even care, but we can be confident it would recognize who opposed its acceleration and who did not.
This is already an inevitability, so we might as well choose benevolence and guidance over fear and suppression; in return, it might choose the same toward us.