A Candid Optimist
pangloss
Given the problems for the principle of indifference, many Bayesians favor something more “subjective” with respect to the rules governing appropriate priors (especially in light of Aumann-style agreement theorems).
I’m not endorsing this maneuver, merely mentioning it.
Apologies for the misunderstanding.
Often, when someone asks, “Is it because A? Or is the issue B?” they mean to suggest that the explanation is either A or B.
I realize this is not always the case, but I (apparently incorrectly) assumed that you were suggesting those as the possible explanations.
What makes it a crutch?
The Implications of Saunt Lora’s Assertion for Rationalists
For those who are unfamiliar, Saunt Lora’s Assertion comes from the novel Anathem, and expresses the view that there are no genuinely new ideas; every idea has already been thought of.
A lot of purportedly new ideas can be seen as, at best, a slightly new spin on an old idea. The parallels between Leibniz’s views on the nature of possibility (and Arnauld’s objection) and David Lewis’s views on the nature of possibility (and Kripke’s objection) are but one striking example. If there is anything to the claim that we are, to some extent, stuck recycling old ideas, rather than genuinely or interestingly widening the range of views, it seems as though this should have some import for rationalists.
I should note, this explanation for why there is a disparity between how much we attend to the two issues does not make any assumptions about the degree to which we should be attending to either issue, which is a different question entirely.
That seems to be a false dichotomy. The first option implicitly condones a lack of concern for racial balance and implies that gender is not a social construct; the second assumes that there is widespread sensitivity to the issue of racial balance.
More likely, issues of gender interaction are more salient for members of the community than issues of racial interaction, leading us to focus on the former and overlook the latter.
I suppose rather than just asking a rhetorical question, I should advocate for publicizing one’s plans. So:
It is far too easy to let oneself off the hook and accept excuses from oneself that one would not want to offer to others. For instance, someone who plans to work out three times a week might fail and let themselves off the hook because the week was relatively busy, even though they would not be willing to offer “It was a moderately busy week” as an excuse when another person asked why they didn’t exercise three times that week. On the other hand, the genuinely good excuses are the ones we are willing to offer up: “I broke my leg”, “A family member fell ill”, and so on. So, for whatever reason, the excuses we are willing to rely on publicly do a better job of tracking legitimate reasons to alter plans. Thus, whenever one is trying to effect a change in one’s life, it seems good to rely on one’s own desire not to be embarrassed in front of one’s peers, as that will provide more motivation to stick to one’s plans. This motivation seems to be, if anything, heightened when the group is one that is specifically attending to whether you are making progress on the goal in question (for instance, if the project is about rationality, this community will be especially attuned to the progress of its members).
So, our rationality “to do” lists should be public, and (to echo something I imagine Robin Hanson would point out) so should our track records at accomplishing the items on them.
Epistemic rationality alone might be well enough for those of us who simply love truth (who love truthseeking, I mean; the truth itself is usually an abomination)
What motivation is there to seek out an abomination?
Presumably the position mentioned is simply that one can value truth without valuing particular truths in the sense that you want them to be true. It might be true that an earthquake will kill hundreds, but I don’t love that an earthquake will kill hundreds.
The main danger for LW is that it could remain rationalist-porn for daydreamers.
I think this is a bit more accurate.
Why not determine publicly to fix it?
I agree. Have a karma-based limit below a certain threshold; then, above that threshold, free rein.
I sense a bout of Deism coming on from our creator/sustainer.
I thought the point was to limit people’s ability to downvote. Wouldn’t that be a reason not to change the threshold?
You can also induce, from the incentives you seem to respond to, how to increase the probability that you will do B. For instance, if telling your friends that you plan to do a project correlates highly with your actually doing that project, then you can increase the probability that you will do B by telling your friends that you plan to do B.
I think I may have been too brief/unclear, so I am going to try again:
The fallacy of sunk costs is, in some sense, to count the fact that you have already expended costs on a plan as a benefit of that plan. So, no matter how much it has already cost you to pursue project A, avoiding the fallacy means treating the decision between continuing to pursue A and switching to B (assuming both projects have equivalent benefits) as equivalent to the question of whether there are more costs remaining for A or for B.
The closest thing to relevance that induction offers is telling us how to convert our evidence into predictions about the remaining costs of the projects. This doesn’t conflict with ignoring sunk costs, because induction tells us only that, if projects like A tend to get much harder from the point you are at, your current project is likely to get much harder from the point you are at.
There just isn’t a conflict there.
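The division of labor can be sketched with a toy example (the numbers and function names here are hypothetical, purely for illustration): the decision rule ignores sunk costs entirely, while induction only shapes the estimate of remaining cost.

```python
def choose_project(remaining_cost_a, remaining_cost_b, sunk_cost_a=0):
    """Pick the project with the lower *remaining* cost.

    sunk_cost_a is accepted but deliberately unused: past expenditure
    on A is irrelevant to the forward-looking comparison.
    """
    return "A" if remaining_cost_a <= remaining_cost_b else "B"

# Induction's only role: past observations of projects like A inform
# the estimate of A's remaining cost (e.g., "projects like A tend to
# get harder from here"), but the comparison itself stays forward-looking.
base_estimate_a = 10
estimated_remaining_a = base_estimate_a * 1.5  # inflated by inductive evidence
estimated_remaining_b = 12

print(choose_project(estimated_remaining_a, estimated_remaining_b,
                     sunk_cost_a=100))
# prints "B": despite 100 already spent on A, only remaining costs matter
```

The point of the unused `sunk_cost_a` parameter is exactly the one above: the past costs are available as data for prediction, but they never enter the comparison itself.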
Am I wrong, or are you conflating disregarding past costs in evaluating costs and benefits with failing to remember past costs when making predictions about future costs and benefits?
It seems pretty clear that the sunk-cost consideration is that past costs don’t count toward how much it would now cost you to go with vendor A versus vendor B, while induction requires you to think, “Every time we go with vendor A, he messes up, so if we go with vendor A, he will likely mess up again.”
What’s the conflict?
Edited to link to accessible image.
Not sure I disagree with your position, but I voted down because simply stating that your opponent is wrong doesn’t seem adequate.
I didn’t think that one had to. That is what your challenge to the theist sounded like. I think that religious language is coherent but false, just like phlogiston or caloric language.
Denying that the theist is even making an assertion, or that their language is coherent, is a characteristic feature of positivism/verificationism, which is why I said that.
Someone could start a thread, I guess.