Also, I am not sure that you were getting my point. If the choice to do away with consciousness is made in the future, it will be made by future entities with much more information and clearer reasons for doing so. Without that future information and reasoning at our disposal, we can't really criticize the decision. I can confidently say that my consciousness (based on what I know) does not want to be gotten rid of right now. If overwhelmingly convincing reasons come along to change my mind, then I will make that decision at that time, with the best information available then.
My point was that the decision-making process is up to the future self and depends on future information. The future self will not be making worse decisions. It will not make decisions that do not benefit itself (based on a version of your current values that is only slightly different).
Does that make sense? Or should I try to explain it again?
You're definitely missing the point of the whole thing. Suppose that the optimal design for gaining knowledge is something like this: a vast supercomputer without the slightest bit of awareness or emotion.
I think that is very unlikely. Even in the worst-case scenarios, I can't imagine that a superintelligence wouldn't inherit some sort of values.
I don't see the problem with that being the eventual case. Death of the state of the world as we know it, yes, but also the existence of a new entity. That's the way the cookie crumbles.
Yes; I don’t think I was getting your point.