You missed my point 3 times out of 3. Wait, I’ll put down the flyswatter and pick up this hammer:
Excluding certain persons from CEV creates the very issues that CEV was intended to resolve in the first place. The mechanism you suggest—excluding persons whom YOU deem unfit—might look attractive to you, but it will not be universally acceptable.
Note that “our coherent extrapolated volition is our wish if we knew more, were smarter...” etc. The EVs of yourself and that suicidal fanatic should be pretty well aligned—you both probably value freedom, justice, friendship and security, and like good food, sex and World of Warcraft(1)… you just don’t know why he believes suicidal fanaticism is the right way under his circumstances, and he is, perhaps, not smart enough to see other options for striving toward his values.
Can I also ask you to re-read CEV, paying particular attention to Q4 and Q8 in the PAQ section? They deal with the instinctive discomfort of including everyone in the CEV.
(1) That was a backhand with the flyswatter, which I grabbed with my left hand just then.
Note that “our coherent extrapolated volition is our wish if we knew more, were smarter...” etc. The EVs of yourself and that suicidal fanatic should be pretty well aligned—you both probably value freedom
No. I will NOT assume that extrapolating the volition of people with preferences vastly different from mine will magically make those volitions compatible with my own. The universe is just not that convenient. Pretending it is while implementing an FAI is suicidally naive.
Can I also ask you to re-read CEV, paying particular attention to Q4 and Q8 in the PAQ section? They deal with the instinctive discomfort of including everyone in the CEV.
I’m familiar with the document, as well as approximately everything else said on the subject here, even in passing. This includes Eliezer proposing ad-hoc workarounds to the “What if people are jerks?” problem.
Quite right, don’t assume. Think it through. Then you may be less inclined to pepper your posts with non-sequiturs like “magically”, “pretending” and “naive”.
I’m familiar with the document, as well as approximately everything else said on the subject here, even in passing.
Great! But, IMHO, you have a tendency to miss the point. So:
Can I also ask you to re-read CEV, paying particular attention to Q4 and Q8 in the PAQ section? They deal with the instinctive discomfort of including everyone in the CEV.
What do you mean? As an analogy, 0.01% sure and 99.99% sure are both states of uncertainty. EVs are exactly the same or they aren’t. If someone’s unmuddled EV is different from mine—and it will be—I am better off with mine influencing the future alone rather than the future being influenced by both of us, unless my EV sufficiently values that person’s participation.
My current EV places some non-infinite value on each person’s participation. You can assume, for the sake of argument, that each person’s EV would value this even more.
You can correctly assume that, for each person, all else being equal, I’d rather have them than not (though not necessarily at the cost of having the universe diverted from my wishes). But I don’t really see why the death of most of the single ring species that is everything alive today makes selecting humans alone for CEV the right thing to do, in a way that avoids the problem of excluding the disenfranchised whom the creators don’t care sufficiently about.
If enough humans value what other humans want, and more so when extrapolated, it’s an interlocking enough network to scoop up all humans. But the biologist who spends all day with chimpanzees (dolphins, octopuses, dogs, whatever) is going to be a bit disappointed by the first-order exclusion of his or her friends from consideration.
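To make the “interlocking network” image concrete, here is a toy sketch (the cares_about relation and all the names in it are my illustrative assumptions, not anything from the thread). Starting from the beings whose volitions are extrapolated first-order, it repeatedly adds anyone whom an already-included being cares about; note that the chimpanzee is reached only second-hand, which is exactly the biologist’s complaint.

```python
# Toy sketch of the "interlocking network" of caring (all data hypothetical).

def scooped_up(first_order, cares_about):
    """first_order: beings whose volition is extrapolated directly.
    cares_about: dict mapping each being to the set of beings they value.
    Returns everyone reachable through chains of caring."""
    included = set(first_order)
    frontier = list(first_order)
    while frontier:
        being = frontier.pop()
        for other in cares_about.get(being, set()):
            if other not in included:
                included.add(other)
                frontier.append(other)
    return included

cares = {
    "alice": {"bob"},
    "bob": {"alice", "biologist"},
    "biologist": {"chimpanzee"},  # the chimp enters only via the biologist
}
print(scooped_up({"alice", "bob", "biologist"}, cares))
# includes "chimpanzee", but only because a first-order member cares about it
```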
I mean, once they both take pains to understand each other’s situation and have a good, long think about it, they would find that they agree on the big issues and can easily accommodate their differences. I even suspect that, overall, they would value the fact that certain differences exist.
EVs can, of course, be exactly the same, or differ to some degree. But—provided we restrict ourselves to humans—the basic human needs and wants are really quite consistent across an overwhelming majority. There is enough material (on the web and in print) to support this.
Wedrifid (IMO) is making the mistake of confusing situation-dependent subgoals (like, say, “obliterate Israel” or “my way or the highway”) with high-level goals.
I have not thought about extending CEV beyond the human species, apart from taking into account the wishes of your example biologists, etc. I suspect it would not work, because extrapolating the wishes of “simpler” creatures would be impossible. See http://xkcd.com/605/.
Wedrifid (IMO) is making the mistake of confusing situation-dependent subgoals (like, say, “obliterate Israel” or “my way or the highway”) with high-level goals.
You are mistaken. That I entertain no such confusion should be overwhelmingly clear from reading nearby comments.
I have not thought about extending CEV beyond the human species, apart from taking into account the wishes of your example biologists, etc. I suspect it would not work, because extrapolating the wishes of “simpler” creatures would be impossible.
That sounds awfully convenient. If there really is a threshold of how “non-simple” a lifeform has to be to have coherently extrapolatable volitions, do you have any particular evidence that humans clear that threshold and, say, dolphins don’t?
For my part, I suspect strongly that any technique that arrives reliably at anything that even remotely approximates CEV for a human can also be used reliably on many other species. I can’t imagine what that technique would be, though.
(Just for clarity: that’s not to say one has to take other species’ volition into account, any more than one has to take other individuals’ volition into account.)
The lack of a threshold is exactly the issue. If you explicitly include dolphins and chimpanzees, you’d be in a position to apply the same reasoning to include parrots and dogs, then rodents and octopuses, etc., etc.
Eventually you’ll slide far enough down this slippery slope to reach caterpillars and parasitic wasps. Now, what would a wasp want to do, if it understood how its acts affect the other creatures worthy of inclusion in the CEV?
This is what I see as the difficulty in extrapolating the wishes of simpler creatures. Perhaps in fact there is a coherent solution, but having only thought about this a little, I suspect there might not be one.
lack of a threshold...then rodents...parasitic wasps
We don’t have to care. If everyone, or nearly everyone, were convinced that anything weighing less than 20 pounds had no moral value, or any person less than 40 days old, or whatever, that would be that.
Also, just as some infinite sums have finite limits, I do not think that including ever-smaller creatures necessarily makes summing humans’ or the Earth’s morality impossible.
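To spell out the analogy (the example is mine, not the commenter’s): a geometric series gives every term a positive weight, yet the total stays finite,
$$\sum_{n=1}^{\infty} \frac{1}{2^{n}} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1,$$
so assigning every creature some nonzero moral weight need not make the overall sum blow up.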
Ah, OK. Sure, if your concern is that, if we extrapolated the volition of such creatures, we would find that they don’t cohere, I’m with you. I have similar concerns about humans, actually.
I’d thought you were saying that we’d be unable to extrapolate it in the first place, which is a different problem.
Can I also ask you to re-read CEV, paying particular attention to Q4 and Q8 in the PAQ section?
Just, uh… just making sure: you do know that wedrifid has more than fourteen thousand karma for a reason, right? It’s actually not solely because he’s an old-timer; he can be counted on to have thought about this stuff pretty thoroughly.
Edit: I’m not saying “defer to him because he has high status”, I’m saying “this is strong evidence that he is not an idiot.”
I admit I was a little embarrassed as I wrote that paragraph, because this sort of thing can come across as “fuck you”. Not my intent at all; it’s just that the reference is relevant, well written, supports my point—and is too long to quote.
Having said that, your comment is pretty stupid. Yes, he has heaps more karma here—so what? I have more karma here than R. Dawkins and B. Obama combined!
(I prefer “Godspeed!”)
The “so what” is, he’s already read it. Also, he’s, you know, smart. A bit abrasive (or more than a bit), but still. He’s not going to go “You know, you’re right! I never thought about it that way, what a fool I’ve been!”
Edit: Discussed here.
A bit of an ethical egoist (or more than a bit), but still.
I suppose “ethical egoism” fits. But only in some completely subverted “inclusive ethical egoist” sense in which my own “self-interest” already takes into account all my altruistic moral and ethical values. I.e., I’m basically not an ethical egoist at all. I just put my ethics inside the utility function where they belong.
Duly noted! (I apologize for misconstruing you, also.)
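A minimal sketch of what “putting ethics inside the utility function” could look like (the function, names, and weights are illustrative assumptions on my part, not anything the commenter specified): altruistic values appear as ordinary terms of the agent’s own utility, rather than as constraints imposed from outside.

```python
# Toy sketch: "ethics inside the utility function".
# All names and weights are illustrative assumptions, not anyone's actual values.

def utility(own_welfare, others_welfare, altruism_weight=0.5):
    """Self-interest that already includes altruistic terms.

    An agent maximizing this is not an ethical egoist in the usual sense:
    harming others lowers its own utility directly."""
    return own_welfare + altruism_weight * sum(others_welfare)

# Such an agent prefers a generous outcome whenever the altruistic gain
# outweighs the personal cost:
selfish_option = utility(10.0, [0.0, 0.0])   # 10.0
generous_option = utility(8.0, [3.0, 3.0])   # 11.0
assert generous_option > selfish_option
```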