Hard to explain. I’ll briefly go over my agreement/disagreement status on each point.

MWI: Mixed opinion. MWI is a decent bet, but then again that’s a pretty standard opinion among quantum physicists. Eliezer’s insistence that MWI is obviously correct is not justified given his arguments: he doesn’t address the most credible alternatives to MWI, and he doesn’t seem to be cognizant of much of the relevant work. I think I disagree in spirit here even though I sort of agree at face value.

Cryonics: Disagree; nothing about cryonics is “obvious”.

Meh science, Yay Bayes!: Mostly disagree; this is too vague, and there’s little supporting evidence for the face-value interpretation. I agree that Bayes is cool.

Utilitarianism: Disagree; utilitarianism is retarded. Consequentialism is fine, but it’s often applied very naively in practice, e.g. utilitarianism.

Eliezer’s metaethics: Disagree, especially considering that Eliezer has said he thinks he’s solved metaethics, which is outright crazy, though hopefully he was exaggerating.

“‘People are crazy, the world is mad’ is sufficient for explaining most human failure, even to curious people, so long as they know the heuristics and biases literature”: Mostly disagree; LW is much too confident in the heuristics and biases literature, and it’s not nearly a sufficient explanation for many of the things that are commonly alleged to be irrational.
When making claims like this, you need to do something to distinguish yourself from most people who make such claims, who tend to harbor basic misunderstandings, such as an assumption that preference utilitarianism is the only utilitarianism.
Utilitarianism has a number of different features, and a helpful comment would spell out which of the features, specifically, is retarded. Is it retarded to attach value to people’s welfare? Is it retarded to quantify people’s welfare? Is it retarded to add people’s welfare linearly once quantified? Is it retarded to assume that the value of structures containing more than one person depends on no features other than the welfare of those persons? And so on.
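The questions above can be made concrete with a toy sketch (purely illustrative: the numeric “welfare” scores and the `maximin` alternative are invented for the example, and quantifying welfare as a single number is itself one of the contested assumptions). It shows how rejecting only the linear-summation step leaves the earlier assumptions intact:

```python
# Both rules accept the earlier assumptions (welfare is valued and
# quantified); they differ only on the aggregation step.

def total_utilitarian(welfares):
    # Adds welfare linearly: two people at 5 count exactly as much as one at 10.
    return sum(welfares)

def maximin(welfares):
    # Rejects only the linear-summation step: values a population by its
    # worst-off member (a Rawls-flavored alternative aggregation).
    return min(welfares)

population_a = [10, 10, 10]  # equal, modest welfare
population_b = [29, 1, 1]    # one very well-off person, two badly-off

print(total_utilitarian(population_a), total_utilitarian(population_b))  # 30 31
print(maximin(population_a), maximin(population_b))  # 10 1
```

Linear summation prefers population B; maximin prefers A. The disagreement is isolated to the aggregation step, not to valuing welfare itself.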
I suppose it’s easiest for me to just make the blanket metaphilosophical claim that normative ethics without well-justified metaethics just isn’t a real contender for the position of actual morality. So I’m unsatisfied with all normative ethics. I just think that utilitarianism is an especially ugly hack. I dislike fake non-arbitrariness.
You went into the kitchen cupboard
Got yourself another hour, and you gave
Half of it to me
We sat there looking at the faces
Of the strangers in the pages
’Til we knew ’em mathematically

They were in our minds
Until forever
But we didn’t mind
We didn’t know better

So we made our own computer
Out of macaroni pieces
And it did our thinking
While we lived our lives
It counted up our feelings
And divided them up even
And it called our calculation
Perfect love [lives?]

Didn’t even know
That love was bigger
Didn’t even know
That love was so, so
Hey hey hey

Hey this fire, this fire
It’s burning us up
Hey this fire
It’s burning us
Oh, oo oo oo, oo oo oo oo

So we made the hard decision
And we each made an incision
Past our muscles and our bones
Saw our hearts were little stones

Pulled ’em out, they weren’t beating
And we weren’t even bleeding
As we laid them on the granite counter top

We beat ’em up against each other
We beat ’em up against each other
We struck ’em hard against each other
We struck ’em so hard, so hard until they sparked

Hey this fire, this fire
It’s burning us up
Hey this fire
It’s burning us up
Hey this fire
It’s burning us
Oh, oo oo oo, oo oo oo oo
Oo oo oo oo oo oo
— Regina Spektor, The Calculation
Perhaps I show my ignorance. Pleasure-happiness and preference fulfillment are the only maximands I’ve seen suggested by utilitarians. A quick Google search hasn’t revealed any others. What are the alternatives?
I’m unfortunately too lazy to make my case for retardedness: I disagree with enough of its features and motivations that I don’t know where to begin, and I wouldn’t know where to end.
Eudaimonia. “Thousand-shardedness”. Whatever humans’ complex values decide constitutes an intrinsically good life for an individual.
It’s possible that I’ve been mistaken in claiming that, as a matter of standard definition, any maximization of linearly summed “welfare” or “happiness” counts as utilitarianism. But it seems like a more natural place to draw the boundary than “maximization of either linearly summed preference satisfaction or linearly summed pleasure indicators in the brain but not linearly summed eudaimonia”.
That sounds basically the same as what I’d been thinking of as preference utilitarianism. Maybe I should actually read Hare.

What’s your general approach to utilitarianism’s myriad paradoxes and mathematical difficulties?
he doesn’t address the most credible alternatives to MWI
I don’t think you need to explicitly address the alternatives to MWI to decide in favor of MWI. You can simply note that all interpretations of quantum mechanics do one of three things: 1) fail to specify which worlds exist, 2) specify which worlds exist, but do so through a burdensomely detailed mechanism, or 3) admit that all the worlds exist, noting that worlds splitting via decoherence is implied by the rest of the physics. Am I missing something?
admit that all the worlds exist, noting that worlds splitting via decoherence is implied by the rest of the physics.
If “all the worlds” includes the non-classical worlds, then MWI is observationally false. Whether and how decoherence produces classical worlds is a topic of ongoing research.
Is that a response to my point specifically or a general observation? I don’t think “simply noting” is nearly enough justification to decide strongly in favor of MWI—maybe it’s enough to decide in favor of MWI, but it’s not enough to justify confident MWI evangelism nor enough to make bold claims about the failures of science and so forth. You have to show that various specific popular interpretations fail tests 1 and 2.
ETA: Tapping out because I think this thread is too noisy.
I suppose? It’s hard for me to see how there could even theoretically exist a mechanism such as in 2 that failed to be burdensome. But maybe you have something in mind?
It’s hard for me to see how there could even theoretically exist a mechanism such as in 2 that failed to be burdensome.
It always seems that way until someone proposes a new theoretical framework; afterwards, it seems like people were insane for not coming up with said framework sooner.
Well, the Transactional Interpretation, for example.
That would have been my guess. I don’t really understand the transactional interpretation; how does it pick out a single world without using a burdensomely detailed mechanism to do so?