According to the linked blog post, the benefit of the Cold Method is that the cannabinoids are kept intact, whereas the benefit of the Hot Method is that the cannabinoids are not kept intact (THCA converted to THC).
I also reject the claim that C and B are equivalent (unless the utility of survival is 0, +infinity, or -infinity). If I accepted their line of argument, then I would also have to answer the following set of questions with a single answer.
Question E: Given that you’re playing Russian Roulette with a full 100-shooter, how much would you pay to remove all 100 of the bullets?
Question F: Given that you’re playing Russian Roulette with a full 1-shooter, how much would you pay to remove the bullet?
Question G: With 99% certainty, you will be executed. With 1% certainty, you will be forced to play Russian Roulette with a full 1-shooter. How much would you pay to remove the bullet?
Question H: Given that you’re playing Russian Roulette with a full 100-shooter, how much would you pay to remove one of the bullets?
After further reflection, I want to say that the problem is wrong (and several other commenters have said something similar): the premise that your money buys you no expected utility post mortem is generally incompatible with your survival having finite positive utility.
Your calculation is of course correct insofar as it stays within the scope of the problem. But note that it goes through exactly the same for my cases F and G. There you’ll end up paying iff X ≤ L, and thus you’ll pay the same amount to remove just 1 bullet from a full 100-shooter as to remove all 100 of them.
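The equivalence of F and G can be checked directly. A minimal sketch, assuming utility 0 when dead and a linear utility u(w) = w over remaining wealth when alive (the linear u, the wealth figure, and the function name are illustrative assumptions, not part of the original problem):

```python
# Willingness to pay in cases F and G, assuming u(dead) = 0 and u(w) = w
# over remaining wealth w when alive.

def max_acceptable_payment(p_survive_if_pay, p_survive_if_not_pay, wealth):
    """Largest whole-unit payment X worth making, when expected utility
    is p_survive * u(wealth - X) and u(dead) = 0."""
    def eu(p, x):
        return p * max(wealth - x, 0)  # u(w) = w, u(dead) = 0
    best = 0
    for x in range(wealth + 1):
        if eu(p_survive_if_pay, x) > eu(p_survive_if_not_pay, 0):
            best = x
    return best

W = 100  # illustrative wealth

# Case F: full 1-shooter; paying removes the bullet (survival 0 -> 1).
f = max_acceptable_payment(1.0, 0.0, W)

# Case G: 99% executed regardless; paying lifts survival from 0 to 1%.
g = max_acceptable_payment(0.01, 0.0, W)

print(f, g)  # 99 99
```

Since not paying yields expected utility 0 in both cases, any payment that leaves positive wealth is worth making in both, so the two thresholds coincide, exactly as claimed.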
Actually my revised opinion, as expressed in my reply to Tyrell_McAllister, is that the authors’ analysis is correct given the highly unlikely set-up. In a more realistic scenario, I accept the equivalences A~B and C~D, but not B~C.
I claim that the answers to E, F, and G should indeed be the same, but H is not equivalent to them. This should be intuitive. Their line of argument does not claim H is equivalent to E/F/G—do the math out and you’ll see.
I really don’t know what you have in mind here. Do you also claim that cases A, B, C are equivalent to each other but not to D?
Signed up. Upon reflection, I believe the deadline is what let me get away with doing this right now at the expense of putting off studying for yet another hour. But it’s hard to say, because I decided pretty quickly that I was going to do it, and I only came up with that explanation after the fact.
Cached wisdom?
Anyway, I’d be more interested in hearing the regrets of those people who lived true to themselves, didn’t work too hard, let themselves be happier, etc. Do they wish they’d worked harder and “made something of themselves”? Been better at cooperating with the rest of society?
The predicate “is a real number” is absolute for transitive models of ZFC in the sense that if M and N are such models with M contained in N, then for every element x of M, the two models agree on whether x is a real number. But it can certainly happen that N has more real numbers than M; they just have to lie completely outside of M.
Example 1: If M is countable with respect to N, then obviously M doesn’t contain all of N’s reals.
Example 2 (perhaps more relevant to what you asked): Under mild large cardinal assumptions (existence of a measurable cardinal is sufficient), there exists a real number 0# (zero-sharp) which encodes the shortcomings of Gödel’s Constructible Universe L. In particular 0# lies outside of L, so L does not contain all the reals.
Thus if you started with L and insisted on adding a measurable cardinal on top, you would have to also add more reals as well.
Well, models can have the same reals by fiat. If I cut off an existing model below an inaccessible, I certainly haven’t changed the reals. Alternately I could restrict to the constructible closure of the reals L(R), which satisfies ZF but generally fails Choice (you don’t expect to have a well-ordering of the reals in this model).
I think, though, that Stuart_Armstrong’s statement
Often, different models of set theory will have the same model of the reals inside them
is mistaken, or at least misguided. Models of set theory and their corresponding sets of reals are extremely pliable, especially by the method of forcing (Cohen proved CH can consistently fail by just cramming tons of reals into an existing model without changing the ordinal values of that model’s Alephs), and I think it’s naive to hope for anything like One True Real Line.
Additionally, the link in the OP is wrong. I followed it in hopes that Luke would provide a citation where I could see these estimates.
Oh! Well I feel stupid indeed. I thought that all the text after the sidenote was a quotation from Luke (which I would find at the link in said sidenote), rather than a continuation of Mike Darwin’s statement. I don’t know why I didn’t even consider the latter.
I suspect it has to do with some LW users taking FAI seriously and dropping everything to join the cause, as suggested in this comment by cousin_it. In the following discussion, RichardKennaway specifically links to “Taking ideas seriously”.
I really, really dislike April Fool’s jokes like this. Somebody will stumble onto this post at a later date, read it quickly, and come away misinformed.
I’ll grant that the obviously horrible “Frodo Baggins” example should leave a bad taste in rationalists’ mouths, but a glance at the comments shows that several readers initially took the post seriously, even on April 1st.
Grognor, I don’t think it’s fair to insinuate that you may have learned a wrong lesson here. If it’s wrong (I actually doubt that it is), then it’s up to you to try to resist learning it.
As regards walking readers into a trap to teach them lessons, one of my all-time favorite LW posts does exactly this, but is very forthcoming about it. By contrast, I think thomblake overestimates the absurdity of the examples here: I thought they seemed plausible, and that “Frodo Baggins” was just poor reasoning. The comments show I’m not alone here. This level of subtlety may be appropriate on April 1st, but by April 3rd, it’s dated. I would recommend editing in a final line after the conclusion but before the references indicating that this post was an April Fool’s joke.
I am currently reading Kahneman’s book, and about 100 pages in I realized I was going to cache a lot more of the information if I started mapping out some of the dependencies between ideas in a directed graph. Example: I’ve got an edge from {Substitution} to {Affect heuristic}, labeled with the reminder “How do I feel about it? vs. What do I think about it?”. My goal is not to write down everything I want to remember, but rather to (1) provide just enough to jog my memory when I consult this graph in the future, and (2) force myself to think critically about what I’m reading when deciding whether or not to add more nodes and edges.
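One simple way to store such a labeled directed graph is a dict of adjacency lists. A minimal sketch; the node names and edge label follow the one example above, and everything else is illustrative:

```python
# A labeled directed graph of ideas as a dict mapping each node to a list
# of (neighbor, reminder-label) pairs.

from collections import defaultdict

edges = defaultdict(list)  # node -> list of (neighbor, reminder-label)

def add_edge(graph, src, dst, label):
    graph[src].append((dst, label))

add_edge(edges, "Substitution", "Affect heuristic",
         "How do I feel about it? vs. What do I think about it?")

for dst, label in edges["Substitution"]:
    print(f"Substitution -> {dst}: {label}")
```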
But orthonormal, your example displays hindsight bias rather than confirmation bias!
I interpret billswift’s comment to mean:
GLaDOS, you should not just seek confirmation of the legitimacy of the text; you should also seek refutation.
(Or possibly it was meant the other way around?)
In any case, I agree that billswift’s comment is off-base, because GLaDOS’ comment does not actually show confirmation bias.
From Kahneman’s Thinking, Fast and Slow (p 325):
The probability of a rare event is most likely to be overestimated when the alternative is not fully specified… [Researcher Craig Fox] asked [participants] to estimate the probability that each of the eight participating teams would win the playoff; the victory of each team in turn was the focal event.
… The result: the probability judgments generated successively for the eight teams added up to 240%!
Do you (r_claypool) have reason to suspect that Christianity is much more likely to be true than other, (almost-) mutually exclusive supernatural worldviews like, say, Old Norse Paganism? If not, then 5% for Christianity is absurdly high.
Registered.
Since December, I’ve been pursuing a “remedial computer science education”, for the sake of both well-roundedness and employability. My background is in the purest of pure math (Ph. Dropout from a well-ranked program), so I feel I can move fairly quickly here, though the territory is new.
My biggest milestone to date has been solving the first 100 Project Euler problems in Python (no omissions!). I had had a bit of Python experience before, and I picked 100 as the smallest number that sounded impressive (to me).
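For a flavor of the early problems, here is one straightforward Python solution to Problem 1 (whose statement is public: sum all natural numbers below 1000 that are multiples of 3 or 5). This is an illustrative solution, not necessarily the one I wrote:

```python
# Project Euler, Problem 1: sum of all multiples of 3 or 5 below 1000.

def solve(limit=1000):
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

print(solve())  # 233168
```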
Second biggest milestone: following a course outline, I wrote an interpreter for a very limited subset of Scheme/Racket. This really helped demystify programming languages for me. (Although rather than learn OCaml like the course wanted, I just hacked it together in Python so that I could move on to a new project sooner.)
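In the same spirit, here is a toy evaluator for a tiny Scheme-like arithmetic language in Python. It is an illustrative sketch of the tokenize/parse/evaluate pipeline, not my actual interpreter, and it handles only integers and nested (+ - * /) expressions:

```python
# Toy evaluator for s-expressions over integers with + - * /.
# Division is integer division, for simplicity.

import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.floordiv}

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    return int(tok) if tok.lstrip("-").isdigit() else tok

def evaluate(expr):
    if isinstance(expr, int):
        return expr
    op, *args = expr
    vals = [evaluate(a) for a in args]
    result = vals[0]
    for v in vals[1:]:
        result = OPS[op](result, v)
    return result

print(evaluate(parse(tokenize("(+ 1 (* 2 3) (- 10 4))"))))  # 13
```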
In the same vein, I’m currently reading and working through SICP, still using Racket. I’m in Chapter 3 of 5, though I’m often peeking ahead to Chapter 4 because it looks pretty exciting.
Of course, I won’t be a true LISP wizard without understanding macros, so the next (or concurrent) project is to go through the relevant Racket Docs tutorial.
I have some other likely future projects in mind, though I’m actually trying not to plan too far ahead lest it all appear more daunting.
- Forcing myself through some C, to build character. This was explicitly recommended by a software engineer friend as a more “useful” way to spend my time than learning to LISP.
- An algorithms course, possibly using this book
Your post confused me for a moment, because Robinson + Con(PA) is of course not weaker than PA. It proves Con(PA), and PA doesn’t.
I see now that your point is that Robinson arithmetic is sufficiently weak compared to PA that PA should not be weaker than Robinson + Con(PA). Is there an obvious proof of this?
(For example, if Robinson + Con(PA) proved all theorems of PA, would this contradict the fact that PA is not finitely axiomatizable?)
Actually p(just one 1) = 10/(2^10).
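This is easy to verify by brute force, assuming the context is ten independent fair binary trials (that framing is my assumption): enumerate all 2^10 outcomes and count those containing exactly one 1.

```python
# Exhaustive check that p(exactly one 1) = 10/2^10 over ten fair binary trials.

from itertools import product

outcomes = list(product([0, 1], repeat=10))
hits = sum(1 for o in outcomes if sum(o) == 1)
print(hits, len(outcomes))  # 10 1024
```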
Nitpick: this is true if by “a three and a five” you mean (that the dice are labeled and) “die A comes up 3, and die B comes up 5″, but it’s false as written (and in games like Settlers, the identities of simultaneously thrown dice are not tracked).