2: Machine does not allow interaction with other real people. (Less-trivially fixable, but still very fixable. Networked MBLSes would do the trick, and/or ones with input devices to let outsiders communicate with folks who were in them.)
How could you tell the difference? Let’s say I claim to have built an MBLS that doesn’t contain any sentients whatsoever and invite you to test it for an hour. (I guarantee it won’t rewire any preferences or memories; no cheating here.) Do you expect not to be happy? I have taken great care that emotions like loneliness or guilt won’t arise and that you will have plenty of fun. What would be missing?
As in my response to Yasuo, I find it really weird to distinguish between states that involve no difference in experience, that feel exactly the same.
Let’s consider another case: suppose my neurochemistry were altered so I just had a really high happiness set point [...] but had comparable emotional range to what I have now [...] so I could dip low when unpleasant things happened [...]
Why would you want that? To me, that sounds like deliberately crippling a good solution. What good does it do to be in a low mood when something bad happens? I assume this isn’t an easy question to answer, and I’m not calling you out on it, but “I want to be able to feel something bad” sounds positively deranged.
(I can see uses with regard to honest signaling, but then a constant high set point and a better ability to lie would be preferable.)
It does not seem like a transmuted orgasmium version of “me” would remember much [...]. Remembering things is not universally enjoyable, and anyway it’s rarely the most enjoyable thing I could be doing; this faculty would be replaced.
Yes, I would imagine orgasmium to have essentially no memory, or only insofar as is necessary for survival and normal operation. Why does that matter? You already have a very unreliable and sparse memory. You wouldn’t lose anything great in orgasmium; it would always be present. I can only think of the intuition “the only way to access some of the good things that happened to me, right now, is through my memory, so if I lost it, those good things would be gone”. Orgasmium is always amazing.
But then, that can’t be exactly right, since you say you’d be more at ease having memory you simply never use. I can’t understand this. If you don’t use it, how can it possibly affect your well-being, at any point? How can you value something that doesn’t have a causal connection to you?
I think in general this boils down to: I don’t want to lose capacities that I currently have.
How do you know that? I’m not trying to play the postmodernism card “How do we know anything?”, I’m genuinely curious how you arrived at this conclusion. If I try to answer the question “Do I care about losing capacities?”, I go through thought experiments and try to imagine scenarios that differ only in which capacities I have, and then see what emotional reaction comes up. But then I’m still answering the question based on my (anticipated and real) rewards, so I’m really deciding what state I would enjoy more and picking the more enjoyable one (or the less painful one). Wireheading, however, is always maximally enjoyable, so it seems I should always choose it.
(For completeness, I would normally agree with you that losing capacities is bad, but only because losing optimization power makes it harder to arrive at my goals. If I saw no need for more power, e.g. because I’m already maximally happy and there’s a system to ensure sustainability, I’d happily give up everything.)
(Finally, I really appreciate your detailed and charitable answer.)
How could you tell the difference? Let’s say I claim to have built an MBLS that doesn’t contain any sentients whatsoever and invite you to test it for an hour. (I guarantee it won’t rewire any preferences or memories; no cheating here.) Do you expect not to be happy? I have taken great care that emotions like loneliness or guilt won’t arise and that you will have plenty of fun. What would be missing?
I’d probably test such a thing for an hour, actually, and for all I know it would be so overwhelmingly awesome that I would choose to stay. But I expect that, assuming my preferences and memories remained intact, I would rather be out among real people. My desire to be among real people is related to but not dependent on my tendency towards loneliness, and guilt hadn’t even occurred to me. (I suppose I’d think I was being a bit of a jerk if I abandoned everybody without saying goodbye, but presumably I could explain what I was doing first?) I want to interact with, say, my sister, not just with an algorithm that pretends to be her and elicits similar feelings without actually having my sister on the other end.
Why would you want that? To me, that sounds like deliberately crippling a good solution. What good does it do to be in a low mood when something bad happens? I assume this isn’t an easy question to answer, and I’m not calling you out on it, but “I want to be able to feel something bad” sounds positively deranged.
In a sense, emotions can be accurate in much the same way beliefs can. I would react similarly badly to the idea of holding pleasant but inaccurate beliefs. It would be mistaken (given my preferences about the world) to feel equally happy when someone I care about has died (or something else bad has happened) as when someone I care about gets married (or something else good happens).
(I can see uses with regard to honest signaling, but then a constant high set point and a better ability to lie would be preferable.)
Lying is wrong.
You already have a very unreliable and sparse memory.
I know. It is one of the many terrible things about reality. I hate it.
I can only think of the intuition “the only way to access some of the good things that happened to me, right now, is through my memory, so if I lost it, those good things would be gone”. Orgasmium is always amazing.
Memories are a way to access reality-tracking information. As I said, remembering stuff is not consistently pleasant, but that’s not what it’s about.
How can you value something that doesn’t have a causal connection to you?
Counterfactually.
How do you know that? I’m not trying to play the postmodernism card “How do we know anything?”, I’m genuinely curious how you arrived at this conclusion.
Well, I wrote everything above that in my comment, and then noticed that there was this pattern, and didn’t immediately come up with a counterexample to it.
I think it’s fine if you want to wirehead. I do not advocate interfering with your interest in doing so. But I still don’t want it.
Apologies for coming to the discussion very, very late, but I just ran across this.
If I saw no need for more power, e.g. because I’m already maximally happy and there’s a system to ensure sustainability, I’d happily give up everything.
How could you possibly get into this epistemic state? That is, how could you possibly be so sure of the sustainability of your maximally happy state, without any intervention from you, that you would be willing to give up all your optimization power?
(This isn’t the only reason why I personally would not choose wireheading, but other reasons have already been well discussed in this thread and I haven’t seen anyone else zero in on this particular point.)