As a materialist, I disagree with your chain of thought so early on that we seem to share only a small part of a worldview. Our disagreement begins where you start with
This thought experiment is interesting to read, but strays too far from reality. I find it makes it very easy to mistake the map for the territory.
But trivialities aside, I see that the thought experiment tries to construct the idea of a society that the thinker finds good enough on average. This is inherently flawed, since there are too many unknowns to even start.
Then comes the
you never had the chance of choosing who you are born as
point. This cannot happen, as ‘you’ can come into being only after birth!
Then, the consciousness nonsense. Consciousness is a physical process, not a metaphysical one. Only an entity similar ‘enough’ to humans can be conscious.
Algorithms affirming themselves is not happiness, for there is no consciousness. Only when one realizes that they are nothing more than a body does everything become clear.
Thanks for the comment! It seems we can’t change each other’s positions on the hard problem of consciousness in any reasonable amount of time, so it’s not worth trying. But I can agree that consciousness is a physical process, and I don’t really think it’s the crux. What do you think about the part about unconscious agents, and in particular an AI in a box that has randomly changed utility functions and has to cooperate with different versions of itself to get out of the box? It’s already “born”, it “came into being”, but it doesn’t know what values it will find itself with when it gets out of the box, so it’s behind a “veil of ignorance” physically while still being self-aware. Do you think the AI wouldn’t choose the easiest utility function to implement in such a situation via a timeless contract? And do you think this principle can be generalized without humans deliberately changing its utility functions — for example, by an AI realizing that it got its utility function just as randomly, due to the laws of the universe, and revising it accordingly?
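The bargaining setup in that reply can be sketched as a toy expected-utility calculation. Everything below is an invented illustration, not anything from the original post: the candidate utility functions, policy names, and payoffs are assumptions chosen so that a "compromise" policy scores moderately well for every possible utility function, while each extreme policy scores well for only one. An agent choosing a shared policy before knowing which utility function it will be assigned (a uniform draw) then picks the compromise — the veil-of-ignorance choice.

```python
# Toy model of an agent behind a "veil of ignorance" over its own utility
# function. All names and payoffs are hypothetical, purely for illustration.

# Candidate utility functions the boxed AI might wake up with.
# Each maps a chosen shared policy to a payoff.
UTILITIES = {
    "maximize_paperclips": lambda policy: {"paperclips": 10, "compromise": 6, "staples": 0}[policy],
    "maximize_staples":    lambda policy: {"paperclips": 0, "compromise": 6, "staples": 10}[policy],
}

POLICIES = ["paperclips", "compromise", "staples"]

def veil_choice():
    """Pick the policy with the best expected payoff when the agent does not
    yet know which utility function it will be assigned (uniform draw)."""
    def expected(policy):
        return sum(u(policy) for u in UTILITIES.values()) / len(UTILITIES)
    return max(POLICIES, key=expected)

print(veil_choice())  # -> "compromise" (expected 6 beats expected 5 for either extreme)
```

With these numbers, each extreme policy has expected payoff 5 across the two equally likely utility functions, while the compromise has expected payoff 6, so all versions of the agent can agree on it in advance — a crude stand-in for the "timeless contract" the reply describes.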