I am currently reading Kahneman’s book, and about 100 pages in I realized I was going to cache a lot more of the information if I started mapping out some of the dependencies between ideas in a directed graph. Example: I’ve got an edge from {Substitution} to {Affect heuristic}, labeled with the reminder “How do I feel about it? vs. What do I think about it?”. My goal is not to write down everything I want to remember, but rather to (1) provide just enough to jog my memory when I consult this graph in the future, and (2) force me to think critically about what I’m reading when deciding whether or not to add more nodes and edges.
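For concreteness, the structure is just labeled directed edges; something like this hypothetical Python sketch (not the tool I actually use, only an illustration of the shape of the data):

```python
# Hypothetical sketch: nodes are ideas, and each directed edge carries a short
# reminder rather than a full summary of the dependency.
from collections import defaultdict

edges = defaultdict(dict)  # edges[source][target] = reminder label


def add_edge(source, target, reminder):
    """Record a dependency from `source` to `target` with a memory-jogging label."""
    edges[source][target] = reminder


add_edge("Substitution", "Affect heuristic",
         "How do I feel about it? vs. What do I think about it?")

# Consulting the graph later: what hangs off "Substitution", and why?
for target, reminder in edges["Substitution"].items():
    print(f"Substitution -> {target}: {reminder}")
```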
Grognor, I don’t think it’s fair to insinuate that you may have learned a wrong lesson here. If it’s wrong (I actually doubt that it is), then it’s up to you to try to resist learning it.
As regards walking readers into a trap to teach them lessons, one of my all-time favorite LW posts does exactly this, but is very forthcoming about it. By contrast, I think thomblake overestimates the absurdity of the examples here: I thought they seemed plausible, and that “Frodo Baggins” was just poor reasoning. The comments show I’m not alone here. This level of subtlety may be appropriate on April 1st, but by April 3rd, it’s dated. I would recommend editing in a final line after the conclusion but before the references indicating that this post was an April Fool’s joke.
I really, really dislike April Fool’s jokes like this. Somebody will stumble onto this post at a later date, read it quickly, and come away misinformed.
I’ll grant that the obviously horrible “Frodo Baggins” example should leave a bad taste in rationalists’ mouths, but a glance at the comments shows that several readers initially took the post seriously, even on April 1st.
I suspect it has to do with some LW users taking FAI seriously and dropping everything to join the cause, as suggested in this comment by cousin_it. In the following discussion, RichardKennaway specifically links to “Taking ideas seriously”.
Oh! Well I feel stupid indeed. I thought that all the text after the sidenote was a quotation from Luke (which I would find at the link in said sidenote), rather than a continuation of Mike Darwin’s statement. I don’t know why I didn’t even consider the latter.
Additionally, the link in the OP is wrong. I followed it in hopes that Luke would provide a citation where I could see these estimates.
Well, models can have the same reals by fiat. If I cut off an existing model below an inaccessible, I certainly haven’t changed the reals. Alternatively, I could restrict to the constructible closure of the reals, L(R), which satisfies ZF but generally fails Choice (you don’t expect to have a well-ordering of the reals in this model).
I think, though, that Stuart_Armstrong’s statement
Often, different models of set theory will have the same model of the reals inside them
is mistaken, or at least misguided. Models of set theory and their corresponding sets of reals are extremely pliable, especially by the method of forcing (Cohen proved CH can consistently fail by just cramming tons of reals into an existing model without changing the ordinal values of that model’s Alephs), and I think it’s naive to hope for anything like One True Real Line.
The predicate “is a real number” is absolute for transitive models of ZFC in the sense that if M and N are such models with M contained in N, then for every element x of M, the two models agree on whether x is a real number. But it can certainly happen that N has more real numbers than M; they just have to lie completely outside of M.
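To restate that in symbols (my notation): for transitive models $M \subseteq N$ of ZFC, writing $\mathbb{R}^M$ for the set that $M$ believes to be the reals,

$$\mathbb{R}^M = \mathbb{R}^N \cap M, \qquad \text{but possibly} \qquad \mathbb{R}^M \subsetneq \mathbb{R}^N.$$

The predicate is absolute, but the set it carves out is not.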
Example 1: If M is countable with respect to N, then obviously M doesn’t contain all of N’s reals.
Example 2 (perhaps more relevant to what you asked): Under mild large cardinal assumptions (existence of a measurable cardinal is sufficient), there exists a real number 0# (zero-sharp) which encodes the shortcomings of Gödel’s Constructible Universe L. In particular 0# lies outside of L, so L does not contain all the reals.
Thus if you started with L and insisted on adding a measurable cardinal on top, you would have to add more reals as well.
Cached wisdom?
Anyway, I’d be more interested in hearing the regrets of those people who lived true to themselves, didn’t work too hard, let themselves be happier, etc. Do they wish they’d worked harder and “made something of themselves”? Been better at cooperating with the rest of society?
Signed up. Upon reflection, I believe the deadline is what let me get away with doing this right now at the expense of putting off studying for yet another hour. But it’s hard to say, because I decided pretty quickly that I was going to do it, and I only came up with that explanation after the fact.
Actually my revised opinion, as expressed in my reply to Tyrell_McAllister, is that the authors’ analysis is correct given the highly unlikely set-up. In a more realistic scenario, I accept the equivalences A~B and C~D, but not B~C.
I claim that the answers to E, F, and G should indeed be the same, but H is not equivalent to them. This should be intuitive. Their line of argument does not claim H is equivalent to E/F/G—do the math out and you’ll see.
I really don’t know what you have in mind here. Do you also claim that cases A, B, C are equivalent to each other but not to D?
After further reflection, I want to say that the problem is wrong (and several other commenters have said something similar): the premise that your money buys you no expected utility post mortem is generally incompatible with your survival having finite positive utility.
Your calculation is of course correct insofar as it stays within the scope of the problem. But note that it goes through exactly the same for my cases F and G. There you’ll end up paying iff X ≤ L, and thus you’ll pay the same amount to remove just 1 bullet from a full 100-shooter as to remove all 100 of them.
I also reject the claim that C and B are equivalent (unless the utility of survival is 0, +infinity, or -infinity). If I accepted their line of argument, then I would also have to answer the following set of questions with a single answer.
Question E: Given that you’re playing Russian Roulette with a full 100-shooter, how much would you pay to remove all 100 of the bullets?
Question F: Given that you’re playing Russian Roulette with a full 1-shooter, how much would you pay to remove the bullet?
Question G: With 99% certainty, you will be executed. With 1% certainty you will be forced to play Russian Roulette with a full 1-shooter. How much would you pay to remove the bullet?
Question H: Given that you’re playing Russian Roulette with a full 100-shooter, how much would you pay to remove one of the bullets?
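To spell out the single answer (a sketch under the problem’s own assumptions: $L$ is your total money, $u(w) \ge 0$ is the utility of surviving with $w$ left, death has utility $0$, and money buys you nothing once you’re dead):

$$\text{E, F:}\quad \text{pay } X \;\to\; u(L - X), \qquad \text{refuse} \;\to\; 0 \;\text{(certain death)};$$

$$\text{G, H:}\quad \text{pay } X \;\to\; \tfrac{1}{100}\, u(L - X) + \tfrac{99}{100}\cdot 0, \qquad \text{refuse} \;\to\; 0.$$

In every case paying is (weakly) preferred exactly when $X \le L$: the one answer is “everything you have.”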
According to the linked blog post, the benefit of the Cold Method is that the cannabinoids are kept intact, whereas the benefit of the Hot Method is that the cannabinoids are not kept intact (THCA converted to THC).
p(just one 1) = 1/2^9; A whole heap more likely!
Actually p(just one 1) = 10/(2^10): of the 2^10 equally likely outcomes, exactly (10 choose 1) = 10 contain a single 1.
if you role two six sided dice you are just as likely to get two sixes as you are to get a three and a five.
Nitpick: this is true if by “a three and a five” you mean (that the dice are labeled and) “die A comes up 3, and die B comes up 5”, but it’s false as written: two sixes come up with probability 1/36, while a three and a five (in either order) come up with probability 2/36. (And in games like Settlers, the identities of simultaneously thrown dice are not tracked.)
But orthonormal, your example displays hindsight bias rather than confirmation bias!
I interpret billswift’s comment to mean:
(Or possibly it was meant the other way around?)
In any case, I agree that billswift’s comment is off-base, because GLaDOS’ comment does not actually show confirmation bias.