The error rate in replication experiments in the natural sciences is expected to be much, much lower than in the social sciences. Humans and human environments are noisy and complicated. Look at nutrition and medicine: it's taking us decades to figure out whether some substance or food is good or bad for you, and under what circumstances. Why would you expect it to be easier to analyze human psychology and behavior?
The trailer for the movie Transcendence is out.
to value breadth of perspective and flexibility of viewpoint significantly more than internal consistency
As humans we can’t change/modify ourselves very much anyway, but what if we’re able to in the future? What if you can pick and choose your values? It seems to me that, for such an entity, not valuing consistency is like not valuing logic. And then there’s the argument that inconsistency leaves you open to Dutch booking / blackmail.
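To make the Dutch book worry concrete, here's a minimal sketch (the items, preferences, and fee are made up purely for illustration) of how an agent with cyclic preferences can be money-pumped:

```python
# Minimal money-pump sketch: an agent with cyclic (inconsistent) preferences
# A < B, B < C, C < A will pay a small fee for each "upgrade" and can be
# cycled around forever, losing money without ever ending up better off.

# hypothetical preference relation: (have, offered) -> True if the agent
# strictly prefers the offered item and will pay a fee to switch
prefers = {
    ("A", "B"): True,
    ("B", "C"): True,
    ("C", "A"): True,  # the cycle that makes a Dutch book possible
}

def run_money_pump(start="A", fee=1, rounds=9):
    item, paid = start, 0
    offers = ["B", "C", "A"]
    for i in range(rounds):
        offer = offers[i % 3]
        if prefers.get((item, offer)):
            item = offer
            paid += fee  # the agent pays to "upgrade" every single time
    return item, paid

if __name__ == "__main__":
    item, paid = run_money_pump()
    # After 9 trades the agent holds the same item it started with,
    # but has paid 9 units of fee -- that's the Dutch book.
    print(f"holding {item}, total paid {paid}")
```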
Well, whether it’s a “real” change may be beside the point if you put it this way. Our situation and our knowledge are also changing, and maybe our behavior should also change. If personal identity and/or consciousness are not fundamental, how should we value them in a world where any mind-configuration can be created and copied at will?
we value what we value, we don’t value what we don’t value, what more is there to say?
I’m confused about what you mean by this. If there weren’t anything more to say, then nobody would/should ever change what they value? But people’s values change over time, and that’s a good thing. For example, in medieval/ancient times people didn’t value animals’ lives and well-being (as much) as we do today. If a medieval person tells you “well, we value what we value, I don’t value animals, what more is there to say?”, would you agree with him and let him go on burning cats for entertainment, or would you try to convince him that he should actually care about animals’ well-being?
You are, of course, using some of your values to evaluate other values. But they need to be at least consistent, and it’s not really clear which are the “more-terminal” ones. It seems to me byrnema is saying that privileging your own consciousness/identity above others is just not warranted, and that if we could, we really should self-modify to not care more about one particular instance, but rather about how much well-being/eudaimonia (for example) there is in the world in general. It seems like this change would make your value system more consistent and less arbitrary, and I’m sympathetic to this view.
By the same logic, eating your favorite food because it tastes good is also wireheading.
Better, instrumentally, to learn to handle the truth.
It really depends on your goals/goal system. I think the wiki definition is supposed to encompass possible non-human minds that may have some uncommon goals/drives, like a wireheaded clippy that produces virtual paperclips and doesn’t care whether they are in the real or virtual world, so it doesn’t want/need to distinguish between them.
You can use the “can be good at everything” definition to suggest quantification as well. For example, you could take these same agents and make them produce other things, not just paperclips, like microchips, or spaceships, or whatever, and then the agents that are better at making those are the more intelligent ones. So it’s just using more technical terms to mean the same thing.
I looked through some of them; there’s a lot of theory and discussion, but I’m really just interested in a basic, step-by-step guide on what to do.
So I’m interested in taking up meditation, but I don’t know how/where to start. Is there a practical guide for beginners somewhere that you would recommend?
“Regression to the mean” as used above is basically using a technical term to call someone stupid.
Well I definitely wasn’t implying that. I actually wanted to discuss the statistics.
Why? I couldn’t think of a way to make this comment without it sounding somewhat negative towards the OP, so I added this as a disclaimer, meaning that I want to discuss the statistics, not insult the poster.
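Since what I actually wanted to get at was the statistics, here's a minimal simulation sketch of regression to the mean (toy numbers, assuming a model where each observed score is stable skill plus independent noise): the top scorers on one noisy measurement tend to score closer to average on the next one.

```python
# Regression to the mean, illustrated: pick the top 10% on a noisy test,
# then look at how the same people do on an independent retest.
import random

random.seed(0)
N = 100_000

# toy model (assumption): observed score = stable "skill" + independent noise
skill = [random.gauss(0, 1) for _ in range(N)]
test1 = [s + random.gauss(0, 1) for s in skill]
test2 = [s + random.gauss(0, 1) for s in skill]

# select the top 10% by their first-test score
cutoff = sorted(test1)[int(0.9 * N)]
top = [i for i in range(N) if test1[i] >= cutoff]

mean_t1 = sum(test1[i] for i in top) / len(top)
mean_t2 = sum(test2[i] for i in top) / len(top)

# The retest mean is noticeably lower than the selection-test mean, even
# though nobody's underlying skill changed: part of the first-test advantage
# was just noise, and noise doesn't repeat.
print(f"top group: test1 mean = {mean_t1:.2f}, test2 mean = {mean_t2:.2f}")
```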
I look forward to reading your future posts.
I hate to sound negative, but I wouldn’t count on it.
They probably would have flown off had he twisted it faster.
Or maybe you do, but it’s not meaningfully different from deciding to care about some other person (or group of people) to the exclusion of yourself, if you believe in personal identity.
I think the point is actually similar to this discussion, which also somewhat confuses me.
figure out how to make everyone sitting around on a higher level credibly precommit to not messing with the power plug
That’s MFAI’s job. Living on the “highest level” also has the same problem: you have to protect your region of the universe from anything that could “de-optimize” it, and FAI will (attempt to) make sure this doesn’t happen.
I, on the other hand, suspect I don’t mind being simulated and living in a virtual environment. So can I get my MFAI before attempts to build true FAI kill the rest of you?
Not really. You can focus your utility function on one particular optimization process and its potential future execution, which may be appropriate given that the utility function defines the preference over outcomes of that optimization process.
Well, you could focus your utility function on anything you like anyway; the question is why, under utilitarianism, it would be justified to value this particular optimization process. If personal identity were fundamental, then you’d have no choice: conscious existence would be tied to some particular identity. But if it’s not fundamental, then why prefer this particular grouping of conscious-experience-moments rather than any other? If I have the choice, I might as well choose some other set of these moments, because, as you said, “why not”?
Well, shit. Now I feel bad; I liked your recent posts.
Going even further, some philosophers suggest that consciousness isn’t even continuous; e.g., as you refocus your attention or as you blink, there are gaps that we don’t notice. It’s just like how there are gaps in your vision when you move your eyes from one place to another, yet to you it appears as a continuous experience.