I can’t help thinking that Pebblesorter CEV would have to include some aspect of sorting pebbles. Doesn’t that suggest that CEV can malfunction pretty badly?
Funny, I assumed that would mean it was working well...
I have a suspicion, based on a limited degree of personal experience, that the common philosophical practice of coming up with thought experiments may tend to promote this sort of fallacious reasoning. Such “experiments” often artificially force people into exclusive “would you do X or Y?” dilemmas, and anyone who says “well, actually… why wouldn’t you do Z?” is promptly told that they’re missing the point. All of this is fair enough within the bounds of the thought experiment, but if people start seeing real life in the same simplified terms, then that’s something of a problem.
Byrnema, you talk extensively in this post about the LW community having a (dominant) ideology, without ever really explicitly stating what you think this ideology consists of.
I’d be interested to know what, from your perspective are the key aspects of this ideology. I think this would have two benefits:
the assumptions underlying our own ideologies aren’t always clear to us, and having them pointed out could be a useful learning experience; and
the assumptions underlying others’ ideology aren’t always clear to us, and making your impressions explicit would allow others the chance to clarify if necessary, and make sure we’re all on the same page.
(More generally, I think this is a great idea.)
Kieran Healy over at Crooked Timber presents evidence that, while opt-in vs. opt-out does make a difference to whether individuals agree to donate, this doesn’t necessarily translate into differences in actual organ procurement rates, and argues that the real bottlenecks in many countries are organizational/logistical.
The apparent lesson: Don’t assume that just by removing the obvious trivial obstacles, the problem will be solved. There may be less trivial obstacles lurking in the background.
P.S. Reading off the graphs, Austria, Belgium, France, Hungary, Italy, Norway, Poland, Portugal, Spain, Sweden, and Switzerland all appear to have presumed consent.
“If I believe my IQ is 80, and I get 80 on an IQ test, I have no incentive to make excuses to myself, or to try to explain away the results.”
Really? I think it’s pretty common to be (a) not particularly good at something, (b) aware you’re not particularly good at it, and (c) nonetheless not want that fact rubbed in your face if rubbing is avoidable. (Not saying this is necessarily a good thing, but I do think it’s pretty common.)
One who is not a slave is not necessarily a free man.
Indeed. One may be a woman. Or a turtle.
But individuals do fund science (at least in the US): individuals give pretty substantial amounts to universities, even if it’s less overall than what’s provided by governments.
As you sort of allude to later, the issue may be less that individuals don’t fund science than that they don’t fund particular science. I would speculate that this is in no small part because we generally realize that we wouldn’t be very good at picking particular science to fund, so we give generally and let other people decide exactly what projects to pursue. This opens the process up to lots of problems, but it’s not obvious that it’s worse than the feasible alternatives.
(In the same way, lots of people choose to give to generalist charities like Oxfam, rather than trying to evaluate specific projects for themselves, though (a) I suspect it’s easier to tug people’s heartstrings for charitable projects; and (b) people probably overestimate their knowledge of what works in charity more than in science.)
Eliezer, I realise there’s still a way to go, but I just wanted to let you know that this is already much more useful than any conversion I’ve had about QM with anyone in the past. Thank you.
Eadwacer, I might be wrong, but I’d assumed both operations are always performed.
P.S. Jeremy: “atheistic children with an internalized sense of morality, an obvious contradiction”? Spare us, please. Why ruin an otherwise perfectly reasonable comment with such a patently ridiculous cheap shot?
I can see the appeal, but I worry that a metaphor where a single person is given a single piece of software, and has the option to rewrite it for their own and/or others’ purposes without grappling with the myriad upstream and downstream dependencies, vested interests, and so forth, is probably missing an important part of the dynamics of real-world systems.
(This doesn’t really speak to moral obligations to systems, as much as practical challenges doing anything about them, but my experience is that the latter is a much more binding constraint.)
Silas, a suggestion which you can take or leave, as you prefer.
This comment makes some sound points, but IMHO, in an unnecessarily personal way. Note the consistent use of the critical “you”-based formulations (“you just decided”, “you come up with”, “you propose”, “you missed that”). Contrast this with Christian’s comment, which is also critical, but consistently focuses on the ideas, rather than the person presenting them.
I have no idea why you feel the need to throw about thinly-veiled accusations that Warrigal is basically an idiot. (How else could he or she possibly have missed all these really obvious problems you so insightfully spotted?). Maybe you don’t even intend them as such (though I’m baffled as to how you could possibly miss the overtones of your statements when they’re so freakin’ OBVIOUS). But the tendency to belittle others’ intellectual capacities (rather than just their views) is one that you’ve exhibited on a number of prior occasions as well, and one that I think you would do well to try to overcome—if only so that others will be more receptive to your ideas.
PS. For the avoidance of doubt, that final para was intended in part as an ironic illustration of the problem. I’m not that un-self-aware.
PPS. Also, I didn’t vote you down.
“They wanted to maximise their chances of pleasing the prof., not maximise their chances of understanding the world.”
I don’t know that I buy this. If the students make a guess that’s wrong, one would expect that to kickstart a process of the professor helping them to understand why it’s wrong. (Student: “Um… because of heat conduction?” Teacher: “OK, what does heat conduction suggest should happen in this situation?”...) This seems more likely to result in learning than just sitting there and saying “I don’t know”. If anything, I think it’s often a bigger problem from a learning perspective, when people are too afraid of being wrong to put out tentative ideas.
“I don’t know” is a rational response to this situation if you are sure enough of your understanding of all the potential principles involved that you know they can’t explain the phenomenon (and you don’t happen to guess that the professor is messing with you). But it’s fairly clear the students aren’t in that situation, so starting to generate hypotheses about what’s going on seems perfectly sensible. Of course, they should be actual hypotheses, and Eliezer’s perfectly right that “because of heat conduction”, if offered as an actual explanation, isn’t an hypothesis as much as a cop out. But if it’s a starting point, rather than an endpoint, then that seems perfectly reasonable.
In short, the problem isn’t that they’re guessing. It’s if their guesses aren’t actually saying anything, but they think that they are. (And I think Eliezer’s admonition to just say “I don’t know” conflates these two problems.)
don’t use willpower. Ever.
Could you do a post on that?
Yvain, I enjoy your posts, and generally find them useful, informative, and well written.
I also recognize that this view is controversial in some circles, but one thing that would make me enjoy them rather more is if you managed to ferret out the implicit assumption that crops up every now and then that your rationalist protagonists are necessarily male. (Or at least predominantly so; I haven’t been back to do an exhaustive stock-take of your gender-specific pronoun usage, but I do recall being struck by this at least once before, so I figured it was worth a comment this time.)
Just to clarify, I don’t mean Theo here. If you want to use a specifically male example, that’s fine. But phrases like “the most important reason to argue with someone is to change his mind” and “[e]ither a person has enough of the rationalist virtues to overcome it, or he doesn’t” strike me as problematic.
I’m not for a moment suggesting that you’re being consciously sexist here. In fitting with the theme of this post, I spent a fair while rejecting others’ calls for gender neutral language under the mistaken (largely emotional) impression that agreeing with them would have been an admission of some deep moral flaw in me, rather than merely a small and relatively painless step towards inclusiveness—and ultimately better communication.
I’m genuinely puzzled by this sort of hostile reaction to what was really a pretty mild request for gender neutral language/examples. It seems utterly out of proportion to the original comment(s).
Clearly, any example one comes up with is probably capable of somehow excluding someone, and trying to screen off all possible objections seems unduly onerous given (a) it’s damn near impossible; and (b) the benefits of not excluding left-handed hermaphrodite axolotl enthusiasts are, all things considered, rather small.
But that’s not quite what we’re talking about. While women are certainly scarce on LW, in other parts of the world, they comprise roughly half the population. And using gender neutral language/examples is really easy—much easier than jumping through actual hoops, and probably also easier than writing comments telling people how annoyed you are about their nitpicking. The cost-benefit analysis here seems pretty straightforward.
So why does this seem to annoy (some) people so much?
Is the problem that you actually think it’s illegitimate for people to be bothered by stuff like this? Seriously? Wanting to be included is illegitimate? Wow. I guess it’s easy to think that things don’t matter when they don’t systematically affect you personally, but still.
FWIW, Charles Karelis makes this argument extensively in his book The Persistence of Poverty.
While it’s plausible that utility functions are sigmoidal, it’s not obviously true, and it’s certainly not true of many of the utility functions generally used in the literature.
Moreover, even if experienced-utility (e.g. emotional state) functions are sigmoidal, that doesn’t imply that decision-utility functions are, except in the special case that individuals are risk-neutral with respect to experienced utility. More generally than that, a consistent decision-utility function can be any positive monotonic transform of an experienced utility function.
EDIT: I should have added that the implication of that last point is that you can rationalize a lot of behavior just by assuming a particular level of risk preference. You can’t rationalize literally anything (consistency is still a constraint), but you can rationalize a lot. All of this makes it especially important to argue explicitly for the particular form of happiness/utility function you’re relying on.
(EDITED again to hopefully overcome ambiguities in the way different people are using the terms happiness and utility.)
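The point about monotonic transforms can be made concrete with a small sketch (all numbers and functional forms below are made up for illustration): even if experienced utility is a symmetric sigmoid in wealth, a convex monotonic transform of it yields a decision-utility function that prefers a gamble the experienced-utility function is indifferent to.

```python
import math

def experienced(w):
    """Sigmoidal experienced utility of wealth w (illustrative choice)."""
    return 1 / (1 + math.exp(-w))

def decision(w):
    """A convex, positive monotonic transform of experienced utility,
    i.e. risk-seeking with respect to experienced utility."""
    return experienced(w) ** 4

# A 50/50 gamble between wealth -2 and +2, vs. a sure wealth of 0.
gamble_exp = 0.5 * experienced(-2) + 0.5 * experienced(2)
sure_exp = experienced(0)

gamble_dec = 0.5 * decision(-2) + 0.5 * decision(2)
sure_dec = decision(0)

# The sigmoid is symmetric about 0, so in experienced-utility terms
# the gamble and the sure thing tie exactly...
assert abs(gamble_exp - sure_exp) < 1e-9
# ...but under the transformed decision utility, the gamble wins.
assert gamble_dec > sure_dec
```

The same choice behavior is therefore consistent with many different underlying happiness functions, which is why the particular form being assumed needs to be argued for explicitly.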
michael webster,
You seem to have inverted the notation; not Eli.
(D,D) is the Nash equilibrium, not (C,C); and (D,D) is indeed Pareto dominated by (C,C), so this does seem to be a standard Prisoners’ Dilemma.
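For anyone who wants to check this mechanically, here is a minimal sketch using one standard set of PD payoffs (the specific numbers are assumed; any payoffs with T > R > P > S would do):

```python
# Payoff table: key is (player 1's move, player 2's move),
# value is (player 1's payoff, player 2's payoff).
payoffs = {
    ("C", "C"): (3, 3),  # R: mutual cooperation
    ("C", "D"): (0, 5),  # S, T
    ("D", "C"): (5, 0),  # T, S
    ("D", "D"): (1, 1),  # P: mutual defection
}

def is_nash(profile):
    """A profile is a Nash equilibrium if neither player can gain by
    unilaterally switching their own move."""
    for player in (0, 1):
        for alt in ("C", "D"):
            deviation = list(profile)
            deviation[player] = alt
            if payoffs[tuple(deviation)][player] > payoffs[profile][player]:
                return False
    return True

assert is_nash(("D", "D"))      # (D,D) is a Nash equilibrium
assert not is_nash(("C", "C"))  # each player would rather defect from (C,C)
# ...yet (D,D) is Pareto-dominated by (C,C): both players do strictly better.
assert all(payoffs[("C", "C")][i] > payoffs[("D", "D")][i] for i in (0, 1))
```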
If inelegance is your primary concern, then “she” seems at least as good, and probably a lesser evil for other reasons.
As I pointed out before, if someone were more well-versed in evolutionary psychology and understood the root of such intuitions, they could give a better defense.
Sure, but that would still be a rationale generated after the fact, to justify a judgment not initially formed on the basis of those reasons. The point isn’t about whether we can come up with convincing reasons, post-hoc. It’s that, whether or not we end up finding them convincing, they’re still post-hoc. The fact that they don’t seem post-hoc internally is what allows us to maintain the illusion that our opinions were based on sound reasons all along.
This point has different implications depending on whether or not you already think moral realism is false (as Greene does). But it’s not intended (by Greene) as an argument that moral realism is false. (I feel like I’m repeating this point ad nauseam, but your claim that your spherical earth example “shows [gut instincts] can still have objective truth”, still seems to be based on the misapprehension that Greene is using this as an argument against objective moral truth. He’s not. He has separate arguments against that. His argument in this part assumes there is no objective moral truth.)
ETA:
At best, Greene’s thesis may be better off if he just scrapped the reference to the dilemma responses.
I don’t want to be a dick about this, but this strikes me as a strong claim, coming from someone who doesn’t seem to have bothered to read the whole thesis. I’m not sure that Greene should be held responsible for the fact that you don’t seem to get his point, if you haven’t actually read most of his argument.
Seriously, the overall point you’re making is a good one, but the way you’re making it is, IMO, incredibly unfair to Greene. Given that Roko has actually made the argument you seem to be criticizing, I don’t really understand why it’s Greene who’s getting beaten up.
In line with previous comments, I’d always understood the idea of emergence to have real content: “systems whose high-level behaviors arise or ‘emerge’ from the interaction of many low-level elements” as opposed to being centrally determined or consciously designed (basically “bottom-up” rather than “top-down”). It’s not a specific explanation in and of itself, but it does characterise a class of explanations, and, more importantly, excludes certain other types of explanation.
I would think that something like “life/intelligence is an emergent phenomenon” means “you don’t need intelligent design to explain life/intelligence”.