I just took the survey, making this my first post that someone will read!
Raoul589
Even if I could have selected the links I wouldn’t have tried it, because you just know that clicking on something like that will open a new page and delete all of your entered data.
‘I don’t want that’ doesn’t imply ‘we don’t want that’. In fact, if the ‘we’ refers to humanity as a whole, then denisbider’s position refutes the claim by definition.
What evidence is there that we should value anything more than what mental states feel like from the inside? That’s what the wirehead would ask. He doesn’t care about goals. Let’s see some evidence that our goals matter.
We disagree if you intended to make the claim that ‘our goals’ are the bedrock on which we should base the notion of ‘ought’, since we can take the moral skepticism a step further, and ask: what evidence is there that there is any ‘ought’ above ‘maxing out our utility functions’?
A further point of clarification: It doesn’t follow—by definition, as you say—that what is valuable is what we value. Would making paperclips become valuable if we created a paperclip maximiser? What about if paperclip maximisers outnumbered humans? I think benthamite is right: the assumption that ‘what is valuable is what we value’ tends just to be smuggled into arguments without further defense. This is the move that the wirehead rejects.
Note: I took the statement ‘what is valuable is what we value’ to be equivalent to ‘things are valuable because we value them’. The statement has another possible meaning: ‘we value things because they are valuable’. I think both are incorrect for the same reason.
I think that you are right that we don’t disagree on the ‘basis of morality’ issue. My claim is only that which you said above: there is no objective bedrock for morality, and there’s no evidence that we ought to do anything other than max out our utility functions. I am sorry for the digression.
As a wirehead advocate, I want to present my response to this as bluntly as possible, since I think my position is more generally what underlies the wirehead position, and I never see this addressed.
I simply don’t believe that you really value understanding and exploration. I think that your brain (mine too) simply says to you ‘yay, understanding and exploration!’. What’s more, the only way you even know this much, is from how you feel about exploration—on the inside—when you are considering it or engaging in it. That is, how much ‘pleasure’ or wirehead-subjective-experience-nice-feelings-equivalent you get from it. You say to your brain: ‘so, what do you think about making scientific discoveries?’ and it says right back to you: ‘making discoveries? Yay!’
Since literally every single thing we value just boils down to ‘my brain says yay about this’ anyway, why don’t we just hack the brain equivalent to say ‘yay!’ as much as possible?
It seems, then, that anti-wireheading boils down to the claim that ‘wireheading, boo!’.
This is not a convincing argument to people whose brains don’t say to them ‘wireheading, boo!’. My impression was that denisbider’s top level post was a call for an anti-wireheading argument more convincing than this.
So you will only wirehead if doing so would prevent you from doing active, intentional harm to others? Why is your standard so high? TheOtherDave’s speculative scenario should be sufficient to get you to support wireheading, if your argument against it is social good—since in his scenario it is clearly net better to wirehead than not to.
Does it follow that you could consider taking the perspective of your post-wirehead self?
In this way, defection seems to have two social meanings:
Defecting proactively is betrayal. Defecting reactively is punishment.
We seem to have strong negative opinions of the former and somewhat positive opinions of the latter. I think in your salesman example you’re talking about punishment being crucial. In fact, the defection of the customer is only necessary as a response to the salesman’s original defection.
I am curious as to whether you have a similarly real life example of where proactive defection (i.e. betrayal) is crucial (for some societal or group benefit)?
I have a related question about buying stocks. Suppose (for example) that I knew with 100% certainty that the global demand for home robotics would grow tenfold in the next decade.
If this was the only information that I had that wasn’t generally known, is there any action I could take based on this information to reliably make money from the stock market (at least over the next ten years)?
Right. Is there no more sophisticated strategy though?
If I was keeping my portfolio indexed to the market, wouldn’t I be selling Blockbuster shares each month as Blockbuster lost market share? Why would I end up holding lots of Blockbuster?
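To make the indexing I have in mind concrete, here is a toy sketch of cap-weighting with invented numbers (the `cap_weights` helper and the figures are purely hypothetical, not a real strategy or real market data):

```python
def cap_weights(market_caps):
    """Portfolio weights proportional to each company's market capitalisation."""
    total = sum(market_caps.values())
    return {name: cap / total for name, cap in market_caps.items()}

# Year 1: Blockbuster and a rival are the same size (hypothetical caps in $bn).
print(cap_weights({"Blockbuster": 50.0, "Rival": 50.0}))
# {'Blockbuster': 0.5, 'Rival': 0.5}

# Year 2: Blockbuster's market cap has collapsed, so a cap-weighted
# portfolio now holds far less of it.
print(cap_weights({"Blockbuster": 10.0, "Rival": 90.0}))
# {'Blockbuster': 0.1, 'Rival': 0.9}
```

On this picture, tracking cap weights would steadily shrink the Blockbuster position as its market value fell, which is why I’d expect not to be left holding lots of it.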
Suppose that you are literally certain (you’re not just 100% confident, you actually have special perfect information) about the future tenfold growth in demand for home robotics. Are you claiming that there is literally no way of using this information to reliably extract money from the stock market? This surprises me.
Would you expect Vaniver’s indexing to at least reliably turn a profit? Would you expect it to turn a large profit?
In case it’s not clear: I’m not trying to contradict you; I am trying to get advice from you.
Suppose that you got a mysterious note from the future telling you that the demand for home-robotics will increase tenfold in the next decade, and you know this note to be totally reliable. You know nothing else that is not publicly known. What would you do next?
Dammit, I wanted to hear the anecdote.
Last year, I had to choose what I would research in my honours year of my Computer Science degree. I actually remember thinking to myself, ‘I’m going to use all of the techniques I have learned from LW’. I sat down for several hours, carefully analysing my situation, and came to the conclusion that I should research A: it was the superior option on every non-trivial metric I could think of. That was the rational decision.
But then, I chose to research B, because I would have been embarrassed to have to explain my choice of A to my family. And that was it.
It’s kind of like mini-cryonics!
For what it’s worth, I’m probably going to be in Auckland early next year, and I would come to the meetup.