Sure, the superintelligence thought experiment is not the full story.
One problem with writing a rule forbidding the machine from altering human brains is specifying what counts as altering a human brain. I’m skeptical that we can specify that rule in a way that doesn’t lead to disastrous consequences. After all, our brains are being modified all the time by the environment, by causes spanning a wide spectrum from ‘direct’ to ‘indirect.’
Other problems with adding such a rule are given here.
(I meant that the subjective experience used to evaluate situations should be specified via unaltered brains, not that brains shouldn’t be altered.)
You’ve piqued my curiosity. What does this mean? How would you realize that process in the real world?
Come on, this tiny detail isn’t worth the discussion. The classical solution to wireheading is to ask the original person, not the one under the influence: you refer to you-at-a-certain-time, not a you-concept that resolves to something unpredictable at any given future time in any given possible world. In other words, a rigid designator in time.
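To make the rigid-designator-in-time idea concrete, here is a toy sketch of my own (not from the thread, and all names in it are hypothetical): the evaluation function is snapshotted at t0, so later modifications to the mutable value function don’t change how outcomes are scored.

```python
import copy

class Agent:
    def __init__(self, values):
        # Mutable: the environment (or the AI itself) may later alter this.
        self.values = values

    def evaluate(self, outcome):
        return self.values(outcome)

def original_values(outcome):
    # Hypothetical: the unaltered person strongly disprefers wireheading.
    return -10 if outcome == "wirehead" else outcome.count("flourish")

agent = Agent(original_values)
# Snapshot at t0: "ask the original, not the one under the influence".
frozen = copy.deepcopy(agent)

# Later, the agent's values get altered to endorse wireheading.
agent.values = lambda outcome: 100 if outcome == "wirehead" else 0

print(agent.evaluate("wirehead"))   # 100: the altered agent endorses it
print(frozen.evaluate("wirehead"))  # -10: the original would not
```

The point is only that the evaluator is indexed to a fixed time, so the verdict on “wirehead” doesn’t drift when the live agent’s values are modified.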