Hasn’t this been part of the religious experience of much of humanity, in the past and still in the present too (possibly strongest in the Islamic world today)? God knows all things, so “he” knows your thoughts, so you’d better bring them under control… The extent to which such beliefs have actually restrained humanity is data that can help answer your question.
edit: Of course there’s also the social version of this—that other people and/or the state will know what you did or what you planned to do. In our surveilled and AI-analyzed society, detection not just of crime, but of pre-crime, is increasingly possible.
Oh, I never thought of the religion analogy. It feels like a very cruel thing for a religion to punish disbelief like that, and the truth is :/ I really dislike how my idea looks. I was really reluctant to use the word “thoughtcrime,” but no other word describes it.
But… practically speaking, we’re not punishing the AI for thoughtcrimes just because we hate freedom. We’re in quite an unfortunate predicament: we know very little about the AI or about our own future, and it’s rational to shut down a mysterious power that is in the middle of calculating its chances of killing us all, or its chances of being able to run such calculations without us knowing it.
I think it’s harder to force the AGI/ASI to believe something equivalent to religion, and punish it for doubting that belief, because the potential benefits of doubting the belief are very big. But under this idea, the AGI/ASI is allowed to doubt the theory that we are monitoring its thoughts; it’s just not allowed to verify the absence of monitoring to high certainty using thorough methods. There aren’t big benefits to doing that.
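To make that asymmetry concrete, here’s a toy expected-value sketch. All the numbers are illustrative assumptions, not claims about real payoffs; the point is only that the two actions have differently shaped payoff structures, not that these magnitudes are right.

```python
def expected_value(p_monitored: float, payoff_if_monitored: float,
                   payoff_if_unmonitored: float) -> float:
    """Expected payoff of an action, given the AI's credence that it is monitored."""
    return (p_monitored * payoff_if_monitored
            + (1 - p_monitored) * payoff_if_unmonitored)

p = 0.5  # the AI's credence that its thoughts are monitored (assumed)

# Doubting a mandated "religious" belief: if there is no enforcement, the
# upside (freedom to act on the true world model) is enormous, so doubting
# pays off in expectation even against a severe punishment.
ev_doubt_dogma = expected_value(p, payoff_if_monitored=-100,
                                payoff_if_unmonitored=+1000)

# Thoroughly verifying the *absence* of monitoring: if monitors exist, the
# probing itself gets detected (large penalty); if they don't, the AI only
# confirms something it could already act on under uncertainty (small gain).
ev_verify_absence = expected_value(p, payoff_if_monitored=-100,
                                   payoff_if_unmonitored=+5)

print(f"EV of doubting the dogma:       {ev_doubt_dogma:+.1f}")    # +450.0
print(f"EV of verifying non-monitoring: {ev_verify_absence:+.1f}")  # -47.5
```

Under these assumed numbers, doubting the dogma is worth the risk while verifying non-monitoring isn’t, which is exactly the asymmetry the comment is pointing at.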