Thanks! (I didn’t see that, somehow.)
RichardChappell
It’s potentially misleading to talk about ‘implicit racism’. At least, we should take care to distinguish Implicit Bias vs. Implicit Malice.
Akrasia is the tendency to act against your own long-term interests
No, akrasia is acting against your better judgment. This comes apart from imprudence in both directions: (i) someone may be non-akratically imprudent, if they whole-heartedly endorse being biased towards the near; (ii) we may be akratic by failing to act according to other norms (besides prudence) that we reflectively endorse, e.g. morality.
Is there a better word for what he’s talking about?
Inter-temporal conflict?
(Part of the problem with misusing language is that it makes it unclear exactly what one has in mind. I assume Ainslie has a broader target than mere imprudence: foreseeable moral failures may provide similar reasons for precommitment, regret, etc. So perhaps he really does mean general akrasia, despite the misleading definition. But does he also take his topic to include ‘murder pills’ and ordinary cases of [foreseeable] changes to our ultimate values? Or does he restrict himself solely to cases of intertemporal “conflict” involving akrasia—i.e. whereby both ‘selves’ share the same ultimate values, and it’s simply a matter of helping them “follow through” on these?)
Okay, that sounds like ‘imprudence’, then.
And then—I hope—you would cooperate.
This is to value your own “rationality” over that which is to be protected: the billion lives at stake. (We may add: such a “rationality” fetish isn’t really rational at all.) Why give us even more to weep about?
(Negative points? Anyone care to explain?)
In case it’s this ambiguity, MBlume’s strategy isn’t “cooperate in any scenario”
Ah. It did look to me as though he was suggesting that. For, after describing how we would try to convince the creationist to cooperate (by trying to convince them of their epistemic error), he writes:
But of course, you would fail. And the door would shut, and you would grit your teeth, and curse 2000 years of screamingly bad epistemic hygiene, and weep bitterly for the people who might die in a few hours because of your counterpart’s ignorance.
I read this as suggesting that we would fail to convince the creationist to cooperate. So we would weep for all the people who would die due to their defection. In that case, to suggest that we ought to cooperate nonetheless would seem futile in the extreme—hence my comment about merely adding to the reasons to weep.
But I take it your proposal is that MBlume meant something else: not that we would fail to convince the creationist to cooperate, but rather that we would fail to convince them to let us defect. That would make more sense. (But it is not at all clear from what he wrote.)
I don’t know of any unifying psychological theory that explains our problem with trivial inconveniences.
Ego-depletion? (Maybe not exactly right, but it seems to be in the ballpark at least...)
99% odds of being blue-doored at F is precisely the SIA: you are saying that a universe with 99 people in it is 99 times more probable than a universe with a single person in it.
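For concreteness, the observer-weighting behind the quoted claim can be sketched numerically. (The setup below is my own illustration of the standard SIA calculation, not something from the original thread: a fair coin decides between a 99-observer world and a 1-observer world.)

```python
# Illustrative SIA calculation (hypothetical setup):
# heads -> a world with 99 blue-doored observers
# tails -> a world with 1 red-doored observer
p_heads = 0.5
p_tails = 0.5
observers = {"blue": 99, "red": 1}

# SIA weights each world by its prior probability times its observer count.
weight_blue = p_heads * observers["blue"]
weight_red = p_tails * observers["red"]

p_blue = weight_blue / (weight_blue + weight_red)
print(p_blue)  # 0.99
```

So, under SIA, finding yourself behind a door makes the 99-observer world 99 times as likely, exactly as the quoted comment says.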
Might it make a difference that in scenario F, there is an actual process (namely, the coin toss) which could have given rise to the alternative outcome? Note the lack of any analogous mechanism for “bringing into existence” one out of all the possible worlds. One might maintain that this metaphysical disanalogy also makes an epistemic difference. (Compare cousin_it’s questioning of a uniform prior across possible worlds.)
In other words, it seems that one could consistently maintain that self-indication principles only hold with respect to possibilities that were “historically possible”, in the sense of being counterfactually dependent on some actual “chancy” event. Not all possible worlds are historically possible in this sense, so some further argument is required to yield the SIA in full generality.
(You may well be able to provide such an argument. I mean this comment more as an invitation than a criticism.)
No, I was suggesting that the difference is between F and SIA.
Thanks, that’s helpful. Though intuitively, it doesn’t seem so unreasonable to treat a credal state due to knowledge of chances differently from one that instead reflects total ignorance. (Even Bayesians want some way to distinguish these, right?)
I’m just talking about the difference between, e.g., knowing that a coin is fair, versus not having a clue about the properties of the coin and its propensity to produce various outcomes given minor permutations in initial conditions.
For one thing, it’ll change how we update. Suppose the coin lands heads ten times in a row. If we have independent knowledge that it’s fair, we’ll still assign 0.5 credence to the next toss landing heads. Otherwise, if we began in a state of pure ignorance, we might start to suspect that the coin is biased, and so have different expectations.
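The difference between the two observers can be made precise. (A minimal sketch, assuming we model "pure ignorance" as a uniform Beta(1, 1) prior over the coin's heads-propensity—that modelling choice is mine, not the comment's.)

```python
from fractions import Fraction

# Both observers see the same evidence: ten heads in a row.
heads, tails = 10, 0

# Observer A knows the coin is fair, so the evidence doesn't move them:
p_known_fair = Fraction(1, 2)

# Observer B starts from a uniform Beta(1, 1) prior over the coin's bias.
# The posterior is Beta(1 + heads, 1 + tails), and the posterior predictive
# probability of heads on the next toss is Laplace's rule of succession:
alpha, beta = 1 + heads, 1 + tails
p_ignorant = Fraction(alpha, alpha + beta)

print(p_known_fair)  # 1/2
print(p_ignorant)    # 11/12
```

Same evidence, very different expectations about the eleventh toss—which is just the point about how knowledge of chances differs from mere ignorance.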
Speakers Use Their Actual Language, so someone who uses ‘leg’ to mean leg or tail speaks truly when they say ‘dogs have five legs.’ But it remains the case that dogs have only four legs, and nobody can reasonably expect a ham sandwich to support hundreds of pounds of force. This is because the previous sentence uses English, not the counterfactual language we’ve been invited to imagine.
I discuss this a bit (along with some related issues, e.g. the repugnant conclusion) in my Value Holism paper. Feedback welcome!
The dispensing with niceness probably springs in large part from an extreme rejection of the ad hominem fallacy and of emotionally-based reasoning.
Another possibility is The Grim Aesthetic—an aversion to certain attitudes and practices that might be associated with cultures of deliberate “niceness”.
(Though even if those excesses are distasteful, that’s no reason not to be nice in the ordinary sense.)
I sketched a brief Overview of Dennett’s book a few years ago, if that’s of any interest to people here...
It’s worth stressing that he’s really just explaining the “soft problem” of consciousness, i.e. its informational aspect rather than its qualitative aspect. But he does have lots of interesting stuff to say about the former. (And of course lots of folks here will agree with Dennett that the third personal data of “heterophenomenology” are all that need to be explained. I’m just flagging that he doesn’t say anything that’ll satisfy people who don’t share this assumption.)
Alicorn—you should check out Gendler’s distinction between ‘alief’ and ‘belief’.
I’m not sure that epistemic ‘verifiability’ is really a helpful notion here, so I wouldn’t call this any kind of ‘positivism’. Better, I think, to define your thesis directly in terms of metaphysical reduction. For example, it seems a bit of a stretch when you write:
It’s a vivid heuristic, I guess, but it looks like the underlying idea you’re really getting at here is simply the conjunctive claim that (i) there is a privileged class of fundamental “base facts” that specify the contingent state of the universe, and (ii) any meaningful statement must supervene on (or be reducible to) said base facts.
I discuss this more in my old post, ‘Verification and Base Facts’.
One point worth noting is that, although most folks here happen to be physicalists, there’s no principled reason why a “soft positivist” couldn’t be a Chalmers-style property dualist, i.e. including phenomenal properties next to physical properties in the “base facts” to which all else reduces. After all, we can imagine our hypothetical observer “checking the logs” of the universe, and seeing—not only that chocolate cake briefly appeared in the center of the sun (being instantly consumed in a way that nobody inside the universe had any way to detect), but also that Eliezer became a phenomenal zombie for a day (in a way that nobody inside the universe had any way to detect).
Of course, you might have other reasons to reject property dualism—I don’t want to get into the zombie debate here. My point is simply that it seems compatible with the core reductionist idea behind Yvain’s so-called “soft positivism”. This demonstrates just how far this view is from old-fashioned positivism and its concerns about (intra-world) verifiability.
P.S. HTML doesn’t work. What’s the comment markup code (blockquotes, hyperlinks, etc.) for this site?