All of these sound more like a posteriori justifications than a priori predictions. Good ones. But still.
If people here are wrong, but you care enough to read, you owe it to them and to yourself to examine their arguments critically.
What’s the alternative? Say I can’t solve something, like I couldn’t solve 3n+1 (aka the Collatz conjecture; there’s a sketch of the iteration after this list).
1) Accept that this is something I can’t solve, and give up? Should I live with the frustration of an open question, rather than take comfort in deferring to a semantic stopsign?
2) Try to figure it out on the off chance that I can do better than the 6.5 billion living people plus the scholars of the past? (Almost drove myself mad with 3n+1). There’re other problems that I want to tackle, ones I CAN possibly solve, ones with greater applicability to real life… should I work on this one instead?
3) Should I force myself not to care? (Really difficult; my mind keeps coming back to 3n+1 no matter how counter-productive that is for all other areas of my life).
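(For anyone who hasn’t met it, here’s a minimal sketch of the 3n+1 iteration itself; the conjecture is that the loop below terminates for every positive integer:)

```python
def collatz_steps(n):
    """Count iterations of the 3n+1 map until n reaches 1 (assuming it does)."""
    steps = 0
    while n != 1:
        # Halve n if it's even; otherwise map it to 3n + 1.
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print([collatz_steps(n) for n in range(1, 10)])  # [0, 1, 7, 2, 5, 8, 16, 3, 19]
```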
To be clear: I’m not arguing against it; I’m asking for clarification. I find myself thoroughly confused by this article.
How is a higher probability for meltdown NOT a “point against” the reactor—and how is less waste NOT a “point for?” I think I’m missing some underlying principle here.
If you tell people a reactor design produces less waste, they rate its probability of meltdown as lower.
Wait. WHAT? How does that even make sense?
I suppose if you gave me a long boring lecture about reactors, and then quizzed me on it before I remembered the facts (with my house-cat memory), I could get this wrong for the exact reasons you described, without being irrational.
Suppose there’s a multiple-choice question, “How much waste does reactor 1 produce?” If I know that reactor 1 is the best across most categories (has the most points in its favor), and that all reactors produce between 10 and 15 units of waste, then my answer would be (b) below:
(a) 8 units
(b) 10 units
(c) 12 units
(d) 14 units
And of course, there’s every possibility that “reactor 1” didn’t get the best score in waste production. Didn’t I just make the same mistake as Eliezer described, for completely logical reasons (maximum likelihood guess under uncertainty)? This isn’t a failure of my logic; it’s a failure of my memory.
In real life, if I expected a quiz like this, I would have STUDIED.
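To make that “maximum likelihood guess under uncertainty” concrete, here’s a minimal sketch (the option values and the remembered range are the made-up ones above):

```python
# Toy model of the quiz guess. I remember only two things: all reactors
# produce 10-15 units of waste, and reactor 1 scored best overall.
options = [8, 10, 12, 14]    # the multiple-choice answers (a)-(d)
low, high = 10, 15           # the remembered range of waste outputs

# Rule out answers outside the remembered range, then pick the lowest
# remaining value, since "best overall" makes low waste the likeliest.
feasible = [x for x in options if low <= x <= high]
guess = min(feasible)
print(guess)                 # 10, i.e. option (b) -- a guess, not a logic error
```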
Why else would anyone expect an overall-best-ranking reactor to necessarily be the best at waste production?
Here’s another idea. Suppose that long, boring hypothetical lecture were, on top of that, so confusing that the listener carries away the message that “a meltdown is when a reactor has produced more waste than its capacity.” Then it is a perfectly logical chain of reasoning that if a reactor produces less waste, then its probability of meltdown is lower. But this is poor communication, not poor reasoning.
Profound sadness, would be my answer.
On some primitive gut level you’d expect to be oddly satisfied by your own superiority, and amusedly angry at bad logic. But here’s what made me change my thinking pattern.
In college, I came across this (self-reportedly) highly acclaimed creationist science website. On the front page, complete with pictures, were abstracts from young kids at a creationist science fair. There was this one girl, six to eight years old, whose project was essentially this: she poured clean water into jars, prayed to God for six days not to create life, and at the end of six days presented the jars as evidence against evolution. Her abstract was written with such moving sincerity that it turned my stomach that anyone could do that to her.
EDIT: My memory of a house cat failed me (quite predictably). Thanks, David_Gerard. The blurb I actually read was most probably:
Patricia Lewis (grade 8) did an experiment to see if life can evolve from non-life. Patricia placed all the non-living ingredients of life—carbon (a charcoal briquet), purified water, and assorted minerals (a multi-vitamin) - into a sealed glass jar. The jar was left undisturbed, being exposed only to sunlight, for three weeks. (Patricia also prayed to God not to do anything miraculous during the course of the experiment, so as not to disqualify the findings.) No life evolved. This shows that life cannot come from non-life through natural processes.
I wanted to consider some truly silly solution. But taking only box A is out (I can’t find a good reason for choosing box A, other than a vague argument rooted in irrationality, along the lines that I’d rather not know whether omniscience exists…), so I came up with this instead. I won’t apologize for all the math-economics, but it might get dense.
Omega has been correct 100 times before, right? Fully intending to take both boxes, I’ll go to each of the 100 other people. There’re 4 categories of people. Let’s assume they aren’t bound by psychology and they’re risk-neutral, but they are bound by their beliefs.
Two-boxers who defend their decision do so on grounds of “no backwards causality” (uh, what’s the smart-people term for that?). They don’t believe in Omega’s omniscience. There’re Q1 of these.
Two-boxers who regret their decision also concede to Omega’s near-perfect omniscience. There’re Q2 of these.
One-boxers who’re happy also concede to Omega’s near-perfect omniscience. There’re Q3 of these.
One-boxers who regret foregoing $1000. They don’t believe in Omega’s omniscience. There’re Q4 of these.
I’ll offer to let groups 2 and 3 (who believe I’ll only get 1000) split my 1000 between them, in proportion to their bets, if they’re right. Since they believe in Omega’s perfect predictive powers, they think there’s a 0% chance of me winning. Therefore, it’s a good bet for them. Expected profit = 1000/weight − 0 · (all their money) > 0.
Groups 1 and 4 are trickier. They think Omega has a chance P of being wrong about me. I’ll ask them to bet X = 1001000 · P/((1−P) · weight) − eps, where weight is a positive number > 1 that’s a function of how many people bet how much. Explicitly defining weight(Q1, Q4, various money caps) is a medium-difficulty exercise for a beginning calculus student; if you insist, I’ll model it, but it would take more time than I’ve already spent on this. So, for a person in one of these groups, expected profit = −X · (1−P) + 1001000 · P/weight = eps · (1−P) > 0!
So what do I have now? (Should I pray to Bayes that my intuition be confirmed?) There’re two possible outcomes of taking both boxes.
Both are full. I give the 1001000 to groups 1 and 4, and collect Q2 · 1000 + Q3 · 1000000 from groups 2 and 3, which is at least 1001000 if Q2 > 0 and Q3 > 0, and strictly more if Q3 > 1. This outcome has potential for tremendous profit. Call this number PIE >> 1001000.
Only A is full. I split my 1000 between groups 2 and 3, and collect X1 · Q1 + X4 · Q4 from groups 1 and 4. What are X1 and X4 again? X, the amount each person bets, is different for each group: group 1’s X is X1, and group 4’s is X4.
I need to find the conditions under which X1 · Q1 + X4 · Q4 > 1000. So suppose I under-maximize my profit and completely ignore poor group 1 (their 1000 won’t make much difference either way). Then X = X4 becomes much simpler, X = 1001000 · P/((1−P) · Q4) − eps, and the payoff I get is −Q4 · eps + 1001000 · P/(1−P). P = 0.001 and Q4 · eps < $2 guarantee X1 · Q1 + X4 · Q4 > X4 · Q4 > 1000.
That’s all well and good, but if P is low (under 0.5), I’m getting less than 1001000. What can I do? Hedge again! I would actually go to the people of groups 1 and 4 again, except it’s getting too confusing, so let’s introduce a “bank” with the same mentality as the people of groups 1 and 4 (that there’s a chance P that Omega will be wrong about me). Remember PIE? The bank estimates my chances of getting PIE at P. Let’s say that if I don’t get PIE, I get 1000 (the lowest possible profit for outcome 2; otherwise the bet isn’t worth making). I ask the bank for the following sum: PIE · P + 1000 · (1−P) − eps. The bank makes an expected profit of eps > 0. Since PIE is a large number, my profit at the end is approximately PIE · P + 1000 · (1−P), which exceeds 1001000 as long as PIE · P is large enough.
Note that I’d been trying to find the LOWER bound on this gambit. Actually plugging in numbers for P and Q’s easily yielded profits in the 5 mil to 50 mil range.
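Here’s a quick sanity check of the arithmetic in code — a minimal sketch in which P, the Q’s, and eps are my own made-up values, not anything fixed by the setup above:

```python
# Minimal sanity check of the hedging scheme. All parameter values below
# are assumptions for illustration, not part of the original setup.
P = 0.001                  # groups 1 & 4's belief that Omega is wrong about me
Q2, Q3, Q4 = 30, 30, 20    # assumed group sizes (group 1 ignored, as above)
eps = 0.01                 # margin that makes each bet strictly profitable

# Outcome 1: both boxes full. Groups 2 & 3 pay out their losing bets.
PIE = Q2 * 1_000 + Q3 * 1_000_000
print(PIE)                       # 30030000, far more than 1001000

# Outcome 2: only box A full. Each member of group 4 bet X4; I collect X4 * Q4.
X4 = 1_001_000 * P / ((1 - P) * Q4) - eps
print(round(X4 * Q4, 2))         # ~1001.8, which is > 1000

# Final hedge with the "bank": a guaranteed sum regardless of outcome.
guaranteed = PIE * P + 1_000 * (1 - P) - eps
print(round(guaranteed, 2))      # 31028.99 with these assumed values
```

Note that with P this small, the guaranteed sum clears 1000 comfortably but not 1001000; the multi-million results need more favorable P’s and Q’s.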
True.
I only took that case into account for completeness, to cover my bases against the criticism that “not all one-boxers would be happy with their decisions.”
Naively, when you have a choice between 1000000.01 and 1000000.02, it’s very easy to argue that the latter is the better option. To argue for the former, you would probably cite the insignificance of that cent next to the rest of 1000000.01: that eps doesn’t matter, or that an extra penny in your pocket is inconvenient, or that you already have 1000000.01, so why do you need another 0.01?
Yes, nshepperd, my assumption is that P << 0.5, something in the 0.0001 to 0.01 range.
Besides, arbitrage would still be possible if some people estimated P = 0.01 and others P = 0.0001; the solution would just be messier than anything I’d want to do casually. And if I were unconstrained by the bets I could make (I’d tried to work with a cap before), making profits would be even easier.
I wasn’t exactly trying to solve the problem, only to find a “naively rational” workaround (using the same naive rationality that leads prisoners to rat each other out in the Prisoner’s Dilemma).
When you’re saying that this doesn’t solve Newcomb’s problem, what do you expect the solution to actually entail?
Wow, it’s a really cool insight!
I guess the natural question to ask would be: Do people ever get (genuinely) offended by anything that does not threaten their status?
Going further, I don’t know of people directing offense at animals or inanimate objects. Does the offender need to be perceived as intelligent? In that case, are people less frequently offended at those they consider stupid?
Whoops, didn’t make myself clear.
Is it the case that normal-functioning humans are (almost) never offended by something they themselves don’t perceive as a threat to status?
Since the article makes a statement, I’m trying to take it to its logical conclusion; in particular, to see what outcomes it prohibits. And non-status-based offenses do seem like an obvious thing it prohibits.
Let me try my own stab at a little chat with Omega. By the end of the chat I will either have 1001 K, or give up. Right now, I don’t know which.
Act I
Everything happens pretty much as it did in Polymeron’s dialogue, up until…
Me: “Aha! That means I do have a choice here, even before you have left. If I change my state so that I am unable or unwilling to two-box once you’ve left, then your prediction of my future “decision” will be different. In effect, I will be hardwired to one-box. And since I still want to retain my rationality, I will make sure that this hardwiring is strictly temporary.”
Omega: Yup, that’ll work. So you’re happy with your 1000 K?
Act II
Whereupon I try to exploit randomness.
Me: Actually, no. I’m not happy. I want the entire 1001 K. Any suggestions for outsmarting you?
Omega: Nope.
Me: Are you omniscient?
Omega: As far as you’re concerned, yes. Your human physicists might disagree in general, but I’ve got you pretty much measured.
Me: Okay, then. Wanna make a bet? I bet I can find a way to get over 1000 K if I make a bet with you. You estimate your probability of being right at 100%, right? Nshepperd had a good suggestion….
Omega: I won’t play this game. Or let you play it with anyone else. I thought we’d moved past that.
Me: How about I flip a fair coin to decide between B and A+B? In fact, I’ll use a generator based on quantum randomness to produce the outcome of a truly random coin flip. Even you can’t predict the outcome.
Omega: And what do you expect to happen as a result of this (not-as-clever-as-you-think) strategy?
Me: Since you can’t predict what I’ll do, hopefully you’ll fill both boxes. Then there’s a true 50% chance of me getting 1001 K. My expected payoff is 1000.5 K.
Omega: That, of course, is assuming I’ll fill both boxes.
Me: Oh, I’ll make you fill both boxes. I’ll bias the generator to a 50+eps% chance of one-boxing, for expected winnings of 1000.5 K – eps. Then if you want to maximize your omniscience-y-ness, you’ll have to fill both boxes.
Omega: Oh, taking others’ suggestions already? Can’t think for yourself? Making edits to make it look like you’d thought of it in time? Fair enough. Attribute this one to gurgeh. As to the idea itself, I’ll disincentivize you from randomization at all. I won’t fill box B if I predict you cheating.
Me: But then there’s a 50-eps% chance of proving you wrong. I’ll take it. MWAHAHA.
Omega: What an idiot. You’re not trying to prove me wrong. You’re trying to maximize your own profit.
Me: The only reason I don’t insult you back is because I operate under Crackers Rule.
Omega: Crocker’s Rules.
Me: Uh. Right. Whoops.
Omega: Besides, your random-generator idea won’t work even to get you the cheater’s utility of proving me wrong.
Me: Why not? I thought we’d established that you can’t predict a truly random outcome.
Omega: I don’t need to. I can just mess with your randomness generator so that it gives out pseudo-random numbers instead.
Me: You’re omnipotent now, too?
Omega: Nope. I’ll just give someone a million dollars to do something silly.
Me: No one would ever…! Oh, wait. Anyway, I’ll be able to detect tampering with randomness, the same way it’s possible with a Mersenne twister….
Omega: And I know exactly how soon you’ll give up. Oh, and don’t waste page space suggesting secondary and tertiary levels of ensuring randomness. If, to guide your behavior, you’re using the table of random numbers that I already have, then I already know what you’d do.
Me: Is there any way at all of outsmarting you and getting 1001 K?
Omega: Not one you can find.
Me: Okay then… let me consult smarter people.
This conversation is obviously not going my way. Any suggestions for Act III?
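(For the record, here’s a minimal sketch of the Act II arithmetic, assuming Omega fills both boxes and genuinely can’t predict the coin:)

```python
# Expected payoff (in K) of the biased-coin ploy, assuming Omega fills
# both boxes and genuinely cannot predict the outcome of the coin.
eps = 0.01
p_one_box = 0.5 + eps        # the 50+eps% bias toward one-boxing

# One-boxing yields box B alone (1000 K); two-boxing yields A+B (1001 K).
expected_K = p_one_box * 1000 + (1 - p_one_box) * 1001
print(round(expected_K, 2))  # 1000.49, i.e. 1000.5 K - eps

# If Omega instead leaves box B empty whenever it predicts randomization:
punished_K = p_one_box * 0 + (1 - p_one_box) * 1
print(round(punished_K, 2))  # 0.49 -- which is why Omega's threat works
```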
Overestimating my driving skills is obviously bad. But how about this scenario, where the truth destroys the possibility of happiness?
Suppose that on the final day of exams, you think you’ve done poorly on the last exam. In fact, you only got 1 question in 10 completely right; on the other 9, you hope you’ll get at least a bit of partial credit. On the other hand, all 4 of your friends (in a class of 50) also think they’ve done poorly. Maybe there will be a curve? In fact, if the curve is good enough, you might even get an A for the course.
The grade goes online at 6 PM. It’s already there, and it won’t change.
So what do you do? This is the last grade of the semester, and there are no more exams to study for. A bad grade will make you unhappy for the rest of the evening (you wanted to go to that party, right? You won’t have much fun thinking about that grade). A good grade will make you happy, but so what? Happiness comes with diminishing marginal returns (and for me it’s more like a binary value: happy or not). You have a higher expected utility for tonight if you don’t check your grade. And you’re no worse off checking the grade tomorrow.
Should you destroy all that expected utility with the truth? (For reference, the truth is that you got a C-, which is BAD.)
My “solution” to this problem (probably irrational?) is in the spirit of “The other way is closed.” I look.
To maximize utility, I shouldn’t look at the grade until tomorrow morning. Some people don’t. I once didn’t, and it didn’t bother me too much. And after bad grades, the outcome was usually pretty much as expected, so I know my utility function. That’s not the reason.
This is like the two-box decision of Newcomb’s problem. Rationally (according to Eliezer) you would pick one box. I’m not rational. I pick two. What’s there, is already there.
I. JUST. CAN’T. NOT. LOOK.
I know what you mean. I get that all the time, with all of the unsolved math problems I occasionally look at. And since my name isn’t on Wikipedia yet, I haven’t solved any of them.
Although, in this case I would argue that we’re better off knowing we’re wrong than being happy for the wrong reasons. The happiness at an end-of-semester party comes from different sources (socializing, having fun, etc.), which are, dare I say, the “right” reasons. Destroying this happiness with the truth will not lead to the discovery of more truth, as it were (the grade is already there). Destroying happiness over a mistake at least lets you find truth in acknowledging the mistake.
But then again, if I have a “brilliant” idea, I start working on it immediately, without giving myself much of a chance to bask in its brilliance.
Maybe the experimenters missed <yet another brilliant idea proven wrong in the last century>? Just kidding. What I ask instead is, Do people ever not suffer from conjunction bias?
I read about this experiment a couple of years ago, about logic and intuition. (I’m writing from memory here, so it’s likely I screwed something up.) People were given logical rules and asked to find violations. Something like:
(Rule) If you are under 21, you can’t buy alcohol.
Bob is 24 and buys alcohol. (That’s not a violation)
Tom is 18 and buys alcohol. (Most people spotted this violation).
(Rule) If you go to France, you can’t drive a car.
Bob goes to France and takes a subway. (Not a violation).
Tom goes to France and drives. (Fewer people spotted this violation).
Of course, it wasn’t as easy as it is here, with a rule and a violation right next to each other. The rules were phrased more cleverly, too.
Anyway, people were better at logic when the situation was more intuitive. I wonder if any experiments have been done in which (untrained) people demonstrated a similar absence of conjunction bias?
Maybe something like the version below would work, where the option pointing out that (T) and (F) occur together is made explicit.
Linda is 31 years old…
Please rank…
(F) Linda is active in the feminist movement.
(T) Linda is a bank teller.
(L) Both (T) and (F)
And if that doesn’t work… well, maybe better minds than mine had ALREADY done an experiment. Any suggestions for further reading, anyone? Summaries greatly appreciated.
I would think that an ideal rationalist’s mental state would be dependent on their prior determination of their most likely grade, and on average actually looking at it should not tend to revise that assessment upwards or downwards.
Suppose I estimate the probability of a good curve at roughly p = 5/50 = 10%. If there’s a curve, I’ll get an A (utility 4); otherwise, a C- (utility 1.7). Suppose also that I need a minimum utility of 2 to enjoy the party (which is worth utility 0.2).
My expected utility from not checking the grade is 0.1 × 4 + 0.9 × 1.7 + 0.2 = 2.13. My actual utility once I’d checked the grade is 1.7 + 0.2 = 1.9.
If this expected utility estimate is good, then I should be happy in proportion to it (although I might as well acknowledge now that I failed to account for the difference between expected utility and the utility of the expected outcome, thus assuming that I’m risk-neutral).
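Spelled out as a minimal sketch (same assumed utilities as above):

```python
# Toy model of the grade-checking decision, with the utilities assumed above.
p_curve = 5 / 50             # estimated probability of a good curve
u_A, u_C_minus = 4.0, 1.7    # utility of an A vs. a C-
u_party = 0.2                # extra utility from the party

# Not checking: the grade stays uncertain, so the evening stays enjoyable.
eu_not_checking = p_curve * u_A + (1 - p_curve) * u_C_minus + u_party
print(round(eu_not_checking, 2))  # 2.13

# Checking and finding the C-:
u_checked = u_C_minus + u_party
print(round(u_checked, 2))        # 1.9
```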
What’s the purpose of imprisonment in the first place?
1. To guard society from criminals.
2. To punish the criminals (revenge on behalf of the relatives/victims).
3. To redeem the criminals (so that they don’t commit another crime).
The way Harry’s been acting, it seems like he’d strongly prioritize #3 over the other two. And considering that he didn’t hold too much of a grudge against Draco for gom jabbar’ing him, and believed that Bellatrix could be turned back into an okay human being, it seems like he would want to devise some sort of method to redeem criminals.
And thanks, TobyBartels, for noticing my circular vocabulary issue.
Sorry if I’m hijacking the thread, but I’m in much the same situation: new and don’t know what I’m doing. And not getting much feedback, other than a couple of random upvotes for seemingly nothing.
(Well, at least I’m past asking how to insert a hyperlink).
One question has bugged me for a while now: what’s a “top-level comment,” and is it some kind of privilege to make one? Is it the article itself, or a comment that’s not a reply to another comment? (Since no one has gotten mad at me yet, I either haven’t made one, or nobody noticed.)
Also: what’s the etiquette on editing after someone’s pointed out a flaw in my post? It reduces verbal clutter if I just go back and edit, but that might put the rest of the comments out of context, especially if it’s a major reasoning flaw that can’t just go under “EDIT:”.
And also: if I accept someone’s correction and edit, should I add to the verbal clutter by posting “thanks,” or do my actions (edit + upvote) give enough evidence that I’m thankful?
I’m still in the process of reading the entirety of LW, hopefully before the links turn back to green and I lose track of what I’ve read and what I haven’t. I comment sometimes, but most of my idea flood stays back in MS Word, to ripen or to wither.
So how will I know if I’m doing something wrong?
Oh, and I operate by Crocker’s Rules, so just tell me.
It’s surprising nobody has brought up HPatMoR. Look at the comments in the welcome thread. Harry Potter is a very effective recruiter.
As a rough feel (and not a statistical inference), looks like HPatMoR attracts a much more diverse readership to LW than the takers of that old survey.
Example: look at me. You’d filter me out on two of those categories, both of which happen to be 92+% filters, and one of which is specifically mentioned in the welcome thread as a warning to turn back. And yet this blog is one of the most addictive finds on the internet.
Maybe you shouldn’t underestimate the readership by judging its periphery.
Speaking of filters, what do you mean by
Believes in evolution | Atheist/Agnostic: 24%
Should I take it to mean that 24% of Atheists / Agnostics don’t believe in evolution? That’s a surprising number.
“Oh my gosh! ‘The Sun goes around the Earth’ is true for Hunga Huntergatherer, but for Amara Astronomer, ‘The Sun goes around the Earth’ is false! There is no fixed truth!” The deconstruction of this sophomoric nitwittery is left as an exercise to the reader.
Am I correct that this sophomoric nitwittery can be resolved by taking the Earth as a fixed point? Then the Sun really will go around it. So will the Moon. All the other planets will go around the Sun.
If not, well… you can imagine why I didn’t get an A in that philosophy class where the teacher meant it literally (as in relativism).
Okay. Demographics. Boring stuff. Just skip to the next paragraph. I’m a master’s student in mathematics (hopefully a soon-to-be PhD student in economics). During undergrad, I majored in Biology, Economics and Math, and minored in Creative Writing (and nearly minored in Chemistry, Marine Science, Statistics and PE) … I’ll spare you the details, but most of those you won’t see on my resume for various reasons. Think: Master of None, not Omnidisciplinary Scientist.
My life goal is to write a financially self-sustainable computer game… for reasons I’ll keep secret for now. Seems like I’m not the first one in this thread to have this life goal.
I found LW through Harry Potter & MOR. I’d found HP&MOR through TV Tropes. I’d found TV Tropes through the webcomic The Meek. I’d found The Meek through The Phoenix Requiem. Which I’d found through the Top Web Comics site. That’s as far back as I remember, about 2 years ago.
I haven’t read most of the site, so far only the material about Bayes and the links off of that. And I started reading Harry Potter 3 weeks ago. So as you can see, I’m an ignorant newbie who speaks first and listens second.
I don’t identify myself as a rationalist. Repeat: I DO NOT identify myself as a rationalist. I didn’t notice that I was different from everyone else when I was eleven. Or twelve. Or thereafter. I’m not smart enough to be a rationalist. I don’t mean that in the Socratic sense of “I know nothing, but at least I know more than you, idiot.” I mean I’m just not smart. I have the memory of a house cat. I can’t name-drop on cue. I’m irrational. And I have BELIEFS (among them emergence; when I model them, it’ll be a Take That, but for now it’s just a belief).
Oh, and my name is a reference to BOTH Baldur’s Gate 2, and to my intention of trying to challenge everything on this blog (what’s my alternative? mindlessly agree?), and to how morons can’t add 1+1.