Well, the trick would be that it couldn’t run counter to experience: you would never find yourself actually valuing, say, cooking-for-friends over knifing-your-friends if knifing-your-friends carried more valutrons. You might expect more from cooking-for-friends and be surprised at the valutron output for knifing-your-friends. In fact, that would be one way to tell the difference between “valutrons cause value” and “I value valutrons”: in the latter scenario you might be surprised by valutron output, but not by your subjective values. In the former, you would actually be surprised to find that you valued certain things, namely the ones that turned out to carry a high valutron output.
But that’s pretty much where we are. We don’t find ourselves surprised by ethics on the basis of some external condition that we are checking against, so we can conclude that there IS a difference between “subjective” and “objective” ethics. In fact, when humans try to make systems that way, we end up revising them as our subjective values change, which you would not expect if our values were actually coming from an objective source.
Honestly, if I ever found my values following valutron outputs in unexpected ways like that, I’d suspect some terrible joker from beyond the Matrix was messing with the utility function in my brain, and quite possibly with the physics too.
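The surprise test above can be sketched as a toy model. Everything here is invented for illustration: the act names, the numeric scores, and the hypothetical "valutron meter" are all assumptions, not anything from the original discussion. The point is just that the two hypotheses predict surprise about different quantities: under "valutrons cause value," the agent is surprised by what it finds itself valuing; under "I value valutrons," only the meter readings can surprise it.

```python
# Toy model (purely illustrative): which hypothesis predicts being
# surprised by your OWN values? All scores are arbitrary made-up numbers.

ACTS = ["cooking-for-friends", "knifing-your-friends"]

def valutron_output(act):
    # Hypothetical external meter; imagine it assigns knifing a high score.
    return {"cooking-for-friends": 3.0, "knifing-your-friends": 9.0}[act]

def subjective_value(act, valutrons_cause_value):
    if valutrons_cause_value:
        # "Objective" scenario: felt value tracks the external meter,
        # so the agent can be surprised by what it finds itself valuing.
        return valutron_output(act)
    # "Subjective" scenario: felt value comes from the agent's own
    # utility function, whatever the meter reads.
    return {"cooking-for-friends": 9.0, "knifing-your-friends": -100.0}[act]

def surprised_by_own_values(valutrons_cause_value):
    # The agent expects to prefer cooking; check whether experience agrees.
    expected_best = "cooking-for-friends"
    actual_best = max(ACTS, key=lambda a: subjective_value(a, valutrons_cause_value))
    return actual_best != expected_best

print(surprised_by_own_values(valutrons_cause_value=True))   # True: the meter overrides expectation
print(surprised_by_own_values(valutrons_cause_value=False))  # False: own values match expectation
```

Since we never observe the `True` branch in real life (our values never lurch around to track some external reading), the model agrees with the point above: experience looks like the subjective scenario.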
> We don’t find ourselves surprised by ethics on the basis of some external condition that we are checking against, so we can conclude that there IS a difference between “subjective” and “objective” ethics. In fact, when humans try to make systems that way, we end up revising them as our subjective values change, which you would not expect if our values were actually coming from an objective source.
Right.
Which describes very well the way the type distinction between “objective” and “subjective” feels intuitively obvious and logically sound. Alternatives aren’t conceivable.
That’s one of the zombie’s weak points, anyway.
It just doesn’t seem like much of a zombie. But that makes sense, since it wasn’t discovered by someone trying to pin down an honest sense of fear.
My zombie originally was, and I think I can sum it up as the thought that:
Maybe the same principles that identify wirehead-type states as undesirable under our values would, if completely and consistently applied, identify anything and everything possible as falling into the class of wirehead-type states.
(The simple “enjoy broccoli” was an analogy for the entire complicated human CEV.
I threw in a reference to “meaningful human relationships” not because that’s my problem any more than the average person’s, but because “other people” seems to have something important to do with distinguishing between a happiness we actually want and an undesirable wirehead-type state.)
How do you kill that zombie? My own solution more or less worked, but it was rambling, badly articulated, and generally less impressive than I might like.
And yeah, the philosophical problem is at base an existential angst factory. The real problem to solve for me is, obviously, getting around the disappointing life setback I mentioned.
But laying this philosophical problem to rest with a nice logical piledriver that’s epic enough to friggin incinerate it would be one thing in service of that goal.