>to some degree our disagreement here is semantic
The merely-lexical ambiguity is irrelevant, of course. You responded to the top-level post giving your reasons for not taking action regarding cryonics, so we're just talking about whatever actually affects your behavior. I'm taking sides in your internal conflict: trying to talk to the part of you that wants to affect the world, against the part of you that wants to prevent you from trying to affect the world (by tricking your good-world-detectors).
>I see no reason to care more about my pre-theoretic good-thing detector than the ‘good-thing detector’ that is my whole process of moral and evaluative reflection and reasoning.
Reflection and reasoning, we can agree these things are good. I'm not attacking reason; I'm trying to implement reason by asking about the reasoning that took you from your pre-theoretic good-thing-detector to your post-theoretic good-thing judgements. I'm pointing out that there seems, prima facie, to be a huge divergence between the two. Do you see the apparent huge divergence? There could be a huge divergence without there being a mistake; that's sort of the point of reason, to reach conclusions you didn't already know. It's just that I don't at all see the reasoning that led you there, and it still seems to have produced wrong conclusions. So my question is: what was the reasoning that brought you to the conclusion that, despite what your pre-theoretic good-thing-detectors are aimed at (play, life, etc.), what's actually good is happiness (contra life)? So far I don't think you've described that reasoning, only stated that its result is that you value happiness. (Which is fine; I hadn't asked so explicitly, and maybe it's hard to describe.)
The ‘reasoning’ is basically just teasing out implications, checking for contradictions, that sort of thing. The ‘reflection’ includes what could probably be described as a bunch of appeals to intuition. I don’t think I can explain or justify those in a particularly interesting or useful way; but I will restate that I can only assume you’re doing the same thing at some point.
How, in broad strokes, does one tease out the implication that one cares mainly about happiness and suffering, from the pre-theoretic caring about kids, life, play, etc.?
Well I pre-theoretically care about happiness and suffering too. I hate suffering, and I hate inflicting suffering or knowing others are suffering. I like being happy, and I like making others happy or knowing they're happy. So it's not really a process of teasing out; it's a process of boiling down: asking myself which things seem to matter intrinsically and which instrumentally. One way of doing this is to consider hypothetical situations, selectively vary them, and observe the difference each variation makes to my assessment of the situation. (edit: so that's one place the 'teasing out' happens—I'll work out what value set X implies about hypothetical scenarios a, b, and c, and see whether I'm happy to endorse those implications. It's probably roughly what Rawls meant by 'reflective equilibrium'—induce principles, deduce their implications, repeat until you're more or less satisfied.)
Basically, conscious states are the only things I have direct access to, and I ‘know’ (in a way that I couldn’t argue someone else into accepting, if they didn’t perceive it directly, but that is more obvious to me than just about anything else) that some of them are good and some of them are bad. Via emotional empathy and intellectual awareness of apparently relevant similarities, I deduce that other people and animals have a similar capacity for conscious experience, and that it’s good when they have pleasant experiences and bad when they have unpleasant ones. (edit: and these convictions are the ones I remain sure of, at the end of the boiling-down/reflective equilibrium process)
I think I’ll bow out of the discussion now—I think we’ve both done our best, but to be blunt, I feel like I’m having to repeatedly assure you that I do mean the things I’ve said and I have thought about them, and like you are still trying to cure me of ‘mistakes’ that are only mistakes according to premises that seem almost too obvious for you to state, but that I really truly don’t share.
>Well I pre-theoretically care about happiness and suffering too.
For the record, it wasn't previously obvious to me that you think this, or that it might be the case, and knowing it makes a notch more sense out of the discussion.
For example, it makes me curious whether, when observing, say, a pre-civilization group of humans, I'd end up wanting to describe them as caring about happiness and suffering, beyond caring about various non-emotional things.
Ok, actually I can see a non-Goodharting reason to care about emotional states as such, though it's still instrumental, so it isn't what tslarm was talking about: emotional states are blunt-force brain events, so in a context (e.g. modern life) where the locality of emotions doesn't fit the locality of the demands of life, emotions are disruptive, especially suffering, or, more subtly, any lack of happiness.
>I think I’ll bow out of the discussion now
Ok, thanks for engaging. Be well. Or I guess, be happy and unsufferful.
>I think we’ve both done our best, but to be blunt, I feel like I’m having to repeatedly assure you that I do mean the things I’ve said and I have thought about them, and like you are still trying to cure me of ‘mistakes’ that are only mistakes according to premises that seem almost too obvious for you to state, but that I really truly don’t share.
I don’t want to poke you more and risk making you engage when you don’t want to, but just as a signpost for future people, I’ll note that I don’t recognize this as describing what happened (except of course that you felt what you say you felt, and that’s evidence that I’m wrong about what happened).
Cheers. I won’t plug you into the experience machine if you don’t sign me up for cryonics :)
Deal! I’m glad we can realize gains from trade across metaphysical chasms.