Eliezer, You need to specify if it’s a one-time choice or if it will be repeated. You need to specify if lives or dollars are at stake. These things matter.
Larry_D'Anna
I will step up and claim that GLUTs are conscious. Why wouldn’t they be?
Phil: GLUTs can certainly learn. A GLUT’s program is this:
while (true) {
    x = sensory input
    y, z = GLUT(y, x)
    muscle control output = z
}
Everything a GLUT has learned is encoded into y. Human GLUTs are so big that even their indices are huge.
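To make the loop concrete, here is a toy sketch of a GLUT in Python. Everything here (the state names, the inputs, the table entries) is invented for illustration; the point is only that the entire "mind" lives in the precomputed table, and the program itself does nothing but lookups.

```python
# GLUT maps (internal_state, sensory_input) -> (next_state, motor_output).
# The table entries below are made up; a human-scale GLUT would be
# astronomically large, with the internal state y doing all the work.
GLUT = {
    ("asleep", "alarm"): ("awake", "sit up"),
    ("awake", "coffee"): ("alert", "drink"),
    ("alert", "coffee"): ("alert", "decline"),
}

def run(state, inputs):
    """Run the GLUT loop over a sequence of sensory inputs."""
    outputs = []
    for x in inputs:                 # x = sensory input
        state, z = GLUT[(state, x)]  # pure table lookup, no computation
        outputs.append(z)            # z = muscle control output
    return outputs

print(run("asleep", ["alarm", "coffee", "coffee"]))
# ['sit up', 'drink', 'decline']
```

Note that "learning" here is just the internal-state index changing: the second cup of coffee gets a different response than the first, with no code changing anywhere.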
The more I think about it, the more I am convinced that if any GLUT could ever be made it would be an unspeakably horrible abomination. To explicitly represent the brain states of all the worst things that could happen to a person is a terrible thing. Whether the “internal state” variable is actually pointing at one of them doesn’t seem to make a big moral difference. GLUTs are torture. They are the worst form of torture I’ve ever heard of. I’m glad they’re almost certainly impossible.
Here’s what bugs me: Those two photons aren’t going to be exactly the same, in terms of say frequency or maybe the angle they make against the table. So how close do they have to be for the configurations to merge? Or is that a Wrong Question? Perhaps if we left the photon emitter the same but changed the detector to one that could tell the difference in angle, then the experimental results would change? What if we use an angle-detecting photon detector but program it to dump the angle into /dev/null?
Amazingly great post. But I’m still confused on one point.
Say we want to set up the quantum configuration space for two 1-dimensional particles. So we have a position coordinate for each one, call them x and y. But wait, the two particles aren’t distinguishable, so we really need to look at the quotient space under the equivalence (x,y) ~ (y,x). But this is no longer a smooth manifold is it? At the moment I’m at a loss for a proof that it isn’t, but I certainly can’t find a smooth structure for it. And if it’s not smooth then what the heck do second derivatives of amplitude distributions mean?
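One standard way to see the trouble concretely (this observation is mine, not from the post): unordered pairs of reals correspond to monic quadratics with real roots, via the symmetric coordinates.

```latex
% Map the unordered pair \{x, y\} to its elementary symmetric functions:
%   s = x + y, \qquad p = xy.
% Since s and p are invariant under swapping x and y, this descends to a
% map on the quotient \mathbb{R}^2/\!\sim, and it is a homeomorphism onto
%   \{(s, p) \in \mathbb{R}^2 : s^2 - 4p \ge 0\},
% the set of monic quadratics t^2 - st + p with two real roots.
% The boundary s^2 = 4p is exactly the image of the diagonal x = y,
% so the quotient is a manifold *with boundary*: the smooth structure
% inherited from \mathbb{R}^2 breaks down precisely where the two
% particles coincide, which is where the second-derivative worry bites.
```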
And the greater the distance between blobs, the less likely it is that their amplitude flows will intersect each other and interfere with each other.
Can that be made more precise? Obviously it is true in a purely topological sense, because amplitude distributions evolve according to a differential equation. But that doesn’t tell us how far away the blobs have to be for us to start seeing the effect. Can we put a metric on configuration space, and then get a theorem that says if 99% of the amplitude of psi1 is d units away from 99% of the amplitude of psi2 then the joint distribution evolves approximately like psi1 and psi2 would evolve in isolation, with a maximum error of whatever%?
So where do the probabilities come from? If there’s “an” electron that we’ve calculated has 1⁄4 of its amplitude here, and 3⁄4 of its amplitude across the street, and we have detectors set up in both places, then after the electron has interacted with the detectors and I’ve read their outputs there should be two big blobs of amplitude. One blob with 1⁄4 of the amplitude that represents I-who-saw-the-electron-here, and one blob with 3⁄4 of it that represents I-who-saw-the-electron-across-the-street. Why shouldn’t I bet $1 for $2 if the electron is here? What difference does the amplitude make? I’m either one blob or the other.
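For concreteness, here is the arithmetic of that bet under the Born rule. The specific amplitudes are made up to match the 1⁄4 vs 3⁄4 split in the comment, read as squared-amplitude measure; the code is an illustration, not anything from the original discussion.

```python
import math

def born_probs(amplitudes):
    """Born rule: probabilities are the normalized squared magnitudes."""
    weights = [abs(a) ** 2 for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

# Amplitudes 1/2 and sqrt(3)/2 have squared magnitudes 1/4 and 3/4.
p_here, p_across = born_probs([0.5, math.sqrt(3) / 2])
print(p_here, p_across)  # ~0.25 and ~0.75

# "Bet $1 for $2 if the electron is here": stake $1, get $2 back on a win,
# so the net is +$1 on a win and -$1 on a loss.
ev = p_here * 1 + (1 - p_here) * (-1)
print(ev)  # ~-0.5: a losing bet, IF squared amplitude governs frequencies
```

Which of course just restates the puzzle: the arithmetic says the bet is bad only if the squared amplitudes are the betting weights, and the question is why they should be.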
stephen: If we had a full understanding of fundamental physics then the only other a priori assumption we should need to derive the Born rule should be this: We aren’t special. Our circumstances are typical. In other words: it is possible that at a fundamental physical level there is no Born rule and no reason one should expect a Born rule. But just by some fantastic coincidence, our little branch has followed the Born rule all this time. In fact, we should expect it to stop following the Born rule immediately, for the same reason someone who’s just won the lottery doesn’t expect to win again next time. It’s not physically impossible for us to be this lucky, but it’s not physically impossible for an egg to unscramble itself either.
Fundamental physics + eggs don’t unscramble + anthropic principle should give you the Born rule. If it doesn’t then physicists aren’t done yet.
“But my dear sir, if the fact of 2 + 3 = 5 exists somewhere outside your brain… then where is it?”
For some reason most mathematicians don’t seem to feel this sort of ontological angst about what math really means or what it means for a mathematical statement to be true. I can’t seem to articulate a single reason why this is, but let me say a few things that tend to wash away the angst.
- It doesn’t matter “where it is”; it is a proven consequence of our axioms.
- It is in every structure in the universe capable of representing integers and performing arithmetic on them.
- There are many ways you can define the real numbers, but they’re all isomorphic. When making statements like “2 + 3 = 5” we don’t need to worry about which version of the reals we’re talking about; it’s true for all of them.
- There’s a hierarchy of types of mathematical questions. At the bottom are recursive ones: questions we could answer with a big enough computer and enough time. Then there are R.E. questions: questions that if-the-answer-is-yes, we can confirm with a big enough computer and enough time (also, co-R.E., for if-the-answer-is-no). R.E. + co-R.E. is exactly the questions you can write in first-order logic (with the variables taking on integer values) with symbols for all recursive functions and only one quantifier. More quantifiers move you further up the hierarchy. Past that there are questions like the continuum hypothesis that aren’t even about numbers, and don’t seem to be constrained by anything physical. So even if you feel quite uneasy about what some mathematics means, remember that the stuff low on the hierarchy can be on solid ground even if the higher stuff isn’t.
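The bottom two rungs of that hierarchy can be sketched in code. This is my illustration, not part of the original comment: a recursive question is settled by a computation guaranteed to terminate, while an R.E. (Sigma-1) question can only ever be *confirmed*, by an unbounded search that we cut off at some limit.

```python
def is_perfect(n):
    """Recursive question: 'is n a perfect number?' Always terminates."""
    return n == sum(d for d in range(1, n) if n % d == 0)

def find_perfect_above(k, limit):
    """Semi-decision procedure for the R.E. question
    'is there a perfect number greater than k?'
    A 'yes' is confirmed by exhibiting a witness; a 'no' can never be
    confirmed this way, only timed out at the search limit."""
    for n in range(k + 1, limit):
        if is_perfect(n):
            return n
    return None  # inconclusive: not a refutation, just out of budget

print(is_perfect(6))                 # True: 6 = 1 + 2 + 3
print(find_perfect_above(6, 10_000)) # 28: the witness confirms "yes"
```

The asymmetry in `find_perfect_above` is the whole point: the `return n` branch is a proof, while the `return None` branch proves nothing, which is exactly what separates R.E. from recursive.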
Roko: You think you can convince a paperclip maximizer to value human life? Or do you think paperclip maximizers are impossible?
Caledonian: He isn’t using “too-big” in the way you are interpreting it.
The point is not: Mindspace has a size X, X > Y, and any set of minds of size > Y cannot admit universal arguments.
The point is: For any putative universal argument you can cook up, I can cook up a mind design that isn’t convinced by it.
The reason that we say it is too big is because there are subsets of Mindspace that do admit universally compelling arguments, such as (we hope) neurologically intact humans.
james andrix: we have to worry about what other Optimizers want, not just if they “think correctly”. Evolution still manages to routinely defeat us without being able to think at all.
Roko: What would it even mean for an objective value to be implicit in the structure of the universe? I’m having a hard time imagining any physical situation where that would even make sense. And even if it did, it would still be you that decides to follow that value. Surely if you discovered an objective value implicit in the structure of the universe that commanded you to torture kittens, you would ignore it.
steven: your “not 100% sure” is a perfect example of the problem Eliezer is trying to explain. “Not 100% sure that X is false” is not a valid excuse to waste thought on X if the prior improbability of X is as incredibly tiny as it is for thoughts like “paperclip maximizers will find their own paperclip-related reasons not to murder everyone”.
Tom McCabe: speaking as someone who morally disapproves of murder, I’d like to see the AI reprogram everyone back, or cryosuspend them all indefinitely, or upload them into a sub-matrix where they can think they’re happily murdering each other without all the actual murder. Of course your hypothetical murder-lovers would call this immoral, but I’m not about to start taking the moral arguments of murder-lovers seriously. You just have to come to grips with the fact that the thing we call Morality isn’t anything special from a global, physical perspective. It isn’t written in the stars, it doesn’t follow from pure logic, it isn’t simple or easy to describe. It’s a big, messy, complicated aspect of our specific nature as a species.
Coming to grips with this fact doesn’t mean you have to turn into a moral relativist, or claim that morality is made of nothing but arbitrary individual preference. Those conclusions just don’t follow.
@Tom McCabe: “Beware shutting yourself into a self-justifying memetic loop. If you had been born in 1800, and just recently moved here via time travel, would you have refused to listen to all of our modern anti-slavery arguments, on the grounds that no moral argument by negro-lovers could be taken seriously?”
Generally I think this is a valid point. One shouldn’t lightly accuse a fellow human of being irredeemably morally broken, simply because they disagree with you on any particular conclusion. But in this particular case, I’m willing to take that step. If I know anything at all about morality, then I know murder is wrong.
@Alan Crossman, Roko: No, I do not think that the moral theory that Eliezer is arguing for is relativism. I am willing to say a paperclip maximizer is an abomination. It is a thing that should not be. Wouldn’t a relativist say that passing moral judgments on a thing as alien as that isn’t meaningful? Don’t we lack a common moral context by which to judge (according to the relativist)?
Let me attempt a summary of Eliezer’s theory:
Morality is real, but it is something that arose here, on this planet, among this species. It is nearly universal among humans and that is good enough. We shouldn’t expect it to be universal among all intelligent beings. Also it is not possible to concisely write down a definition for “should”, any more than it is possible to write a concise general AI program.
Roko: “And, of course, this lack of objectivity leads to problems, because different people will have their own notions of goodness.”
Don’t forget the psychological unity of mankind. Whatever is in our DNA that makes us care about morality at all is a complex adaptation, so it must be pretty much the same in all of us. That doesn’t mean everyone will agree about what is right in particular cases, because they have considered different moral arguments (or in some cases, confused mores with morals), but that-which-responds-to-moral-arguments is the same.
Richard: Abortion isn’t a moral debate. The only reason people disagree about it is because some of them don’t understand what souls are made of, and some of them do. Abortion is a factual debate about the nature of souls. If you know the facts, the moral conclusions are indisputable and obvious.
No, anonymous. The problem with communism is that it’s coercive and tyrannical. A super-duper welfare state is not the same as communism, especially as productivity goes up. The difference being: under a welfare state you are taxed a portion of what you have, and some of that goes to the poor. Under communism you are essentially owned by the state. The state can tell you when to work, what to work on, and how many hours. The state tells you what you can or cannot buy, because the state decides what will or will not be produced.
Whatever you think about welfare states, communism is something else entirely.