The post was “Polyamory is Boring,” btw, in case anyone else is curious.
ahbwramc
“Confidence levels inside and outside an argument” seems related.
See, if anything I have the exact opposite problem (which, ironically, I also attribute to arrogance). I almost never engage in arguments with people because I assume I’ll never change their mind. When I do get into a debate with someone, I’m extremely quick to give up and write them off as a lost cause. This probably isn’t a super healthy attitude to have (especially since many of these “lost causes” are my friends and family) but at least it keeps me out of unproductive arguments. I do have a few friends who are (in my experience) unusually good at listening to new arguments and changing their mind, so I usually wind up limiting my in-depth discussions to just them.
I can empathize to an extent—my fiancée left me about two months ago (two months ago yesterday actually, now that I check). I still love her, and I’m not even close to getting over her. I don’t think I’m even close to wanting to get over her. And when I have talked to her since it happened, I’ve said things that I wish I hadn’t said, upon reflection. I know exactly what you mean about having no control of what you say around her.
But, with that being said...
Well, I certainly can’t speak for the common wisdom of the community, but speaking for myself, I think it’s important to remember that emotion and rationality aren’t necessarily opposed—in fact, I think that’s one of the most important things I’ve learned from LW: emotion is orthogonal to rationality. I think of the love I have for my ex-fiancée, and, well...I approve of it. It can’t really be justified in any way (and it’s hard to even imagine what it would mean for an emotion to be justified, except by other emotions), but it’s there, and I’m happy that it is. As Eliezer put it, there’s no truth that destroys my love.
Of course, emotions can be irrational—certainly one has to strive for reflective equilibrium, searching for emotions that conflict with one another and deciding which ones to endorse. And it seems like you don’t particularly endorse the emotions that you feel around this person (I’ll just add that for myself, being in love has never felt like another person’s values were superseding my own—rather, it felt like they were being elevated to being on par with my own. Suddenly this other person’s happiness was just as important to me as my own—usually not more important, though). But I guess my point is that there’s nothing inherently irrational about valuing someone else over yourself, even if it might be irrational for you.
Survey complete! I’d have answered the digit ratio question, but I don’t have a ruler, of all things, at home. Ooh, now to go check my answers for the calibration questions.
Scott is a LW member who has posted a few articles here
This seems like a significant understatement given that Scott has the second-highest karma of all time on LW (after only Eliezer). Even if he doesn’t post much here directly anymore, he’s still probably the biggest thought leader the broader rationalist community has right now.
It’s been a while; any further updates on this project? All the BGI website says is that my sample has been received.
Okay, fair enough, forget the whole increasing of measure thing for now. There’s still the fact that every time I go to the subway, there’s a world where I jump in front of it. That for sure happens. I’m obviously not suggesting anything dumb like avoiding subways, that’s not my point at all. It’s just...that doesn’t seem very “normal” to me, somehow. MWI gives this weird new weight to all counterfactuals that seems like it makes an actual difference (not in terms of any actual predictions, but psychologically—and psychology is all we’re talking about when assessing “normality”). Probably though this is all still betraying my lack of understanding of measure—worlds where I jump in front of the train are incredibly low measure, and so they get way less magical reality fluid, I should care about them less, etc. I still can’t really grok that though—to me and my naive branch-counting brain, the salient fact is that the world exists at all, not that it has low probability.
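To make the measure point concrete, here’s a toy calculation—the amplitude is completely made up, purely for illustration, and nothing below is a real physical estimate:

```python
# Toy comparison of naive branch counting vs. Born-rule measure.
# The amplitude eps is invented for illustration only.

# Suppose that on a given subway ride the "jump" branch has some tiny
# amplitude eps, and the "don't jump" branch has amplitude sqrt(1 - eps^2).
eps = 1e-6  # hypothetical amplitude for the aberrant branch

naive_weight = 1 / 2     # branch counting: two branches, each counts equally
born_weight = eps ** 2   # Born rule: weight is amplitude squared

print(f"Naive branch-counting weight: {naive_weight}")
print(f"Born-rule measure:            {born_weight:.0e}")
print(f"Ratio (naive / Born):         {naive_weight / born_weight:.0e}")
```

The branch “exists” under either accounting, but the Born rule gives it orders of magnitude less reality fluid than equal counting would (a factor of ~5×10^11 in this toy setup)—which is exactly the thing my naive branch-counting brain refuses to internalize.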
I’ve never been entirely sure about the whole “it should all add up to normality” thing in regards to MWI. Like, in particular, I worry about the notion of intrusive thoughts. A good 30% of the time I ride the subway I have some sort of weak intrusive thought about jumping in front of the train (I hope it goes without saying that I am very much not suicidal). And since accepting MWI as being reasonably likely to be true, I’ve worried that just having these intrusive thoughts might increase the measure of those worlds where the intrusive thoughts become reality. And then I worry that having that thought will even further increase the measure of such worlds. And then I worry...well, then it usually tapers off, because I’m pretty good at controlling runaway thought processes. But my point is...I didn’t have these kinds of thoughts before I learned about MWI, and that sort of seems like a real difference. How does it all add up to normality, exactly?
Perhaps (and I’m just thinking off the cuff here) rationality is just the subset of general intelligence that you might call meta-intelligence—i.e., the skill of intelligently using your first-order intelligence to best achieve your ends.
I remember being inordinately relieved/happy/satisfied when I first read about determinism at around age 14 or 15 (in Sophie’s World, fwiw). It was like, thank you, that’s what I’ve been trying to articulate all these years!
(although they casually dismissed it as a philosophy in the book, which annoyed 14-or-15-year-old me)
I think purely from a fundamental attribution error point of view we should expect the average “stupid” person we encounter to be less stupid than they seem.
(which is not to say stupidity doesn’t exist of course, just that we might tend to overestimate its prevalence)
I guess the other question would be, are there any biases that might lead us to underestimate someone’s stupidity? Illusion of transparency, perhaps, or the halo effect? I still think we’re on net biased against thinking other people are as smart as us.
Thanks for writing this. I want to really dig into this paper and make sure I understand it, but it certainly seems like an interesting approach. I’m curious why you say this, though:
Evidently, this approach suffers from a number of limits: the first and the most evident is that it works only in a situation where the system to be observed has already decohered with an environment. It is not applicable to, say, a situation where a detector reads a quantum superposition directly, e.g. in a Stern-Gerlach experiment.
Maybe I’m misunderstanding you, but I thought they addressed this issue:
(from the longer companion paper)
Actually, self-locating uncertainty is generic in quantum measurement. In Everettian quantum mechanics the wave function branches when the system becomes sufficiently entangled with the environment to produce decoherence. The normal case is one in which the quantum system interacts with an experimental apparatus (cloud chamber, Geiger counter, electron microscope, or what have you) and then the observer sees what the apparatus has recorded. For any realistic room-temperature experimental apparatus, the decoherence time is extremely short: less than 10^-20 seconds. Even if a human observer looks at the quantum system directly, the state of the observer’s eyeballs will decohere in a comparable time. In contrast, the time it takes a brain to process a thought is measured in tens of milliseconds. No matter what we do, real observers will find themselves in a situation of self-locating uncertainty (after decoherence, before the measurement outcome has been registered).
As long as there is macroscopic decoherence before the observer has time to register any thoughts, the approach seems to hold, and that’s certainly the case for Stern-Gerlach experiments.
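Just to spell out the separation of timescales being leaned on here, a quick sanity check using the figures quoted above (<10^-20 s for decoherence, tens of milliseconds for a thought):

```python
# Orders of magnitude from the quoted passage; nothing more precise.
decoherence_time = 1e-20  # seconds (upper bound quoted in the paper)
thought_time = 50e-3      # seconds (a thought takes tens of milliseconds)

ratio = thought_time / decoherence_time
print(f"thought time / decoherence time = {ratio:.0e}")  # ~5e+18
```

Decoherence is finished some eighteen orders of magnitude before the observer can register anything, so the window of self-locating uncertainty is unavoidable in practice.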
“Once upon a time there lived a pink unicorn in a big mushroom house with three invisible potatoes. Could you finish the story for me in a creative way and explain why the unicorn ended up painting the potatoes pink?”
Well obviously, the unicorn did it to satisfy the ghost of Carl Sagan, who showed up at the unicorn’s house and started insisting that the potatoes weren’t there. Annoyed, she tried throwing flour on the potatoes to convince him, but it turned out the potatoes really were permeable to flour. It was touch and go for a while there, and even the unicorn started to doubt the existence of her invisible potatoes (to say nothing of her invisible garden and invisible scarecrow—but that at least had done an excellent job of keeping the invisible birds away). Eventually, though, it was found that pink paint coated the potatoes just fine, and so Carl happily went back to his post co-haunting the Pioneer 10 probe. The whole affair turned out to be a boon for the unicorn, as the pink paint put a stop to a previously unfalsifiable dragon, who had been eating her potatoes (or so she suspected—she had never been able to prove it). The dragon, for his part, simply went back to his old habit of terrorizing philosophers’ thought experiments.
Filled it in as well.
I find most interesting the question of which God/religion to believe in. How do they deal with the fact that the actual, historical reason they believe in their specific God/religion is that they were born into it (most likely, anyway—it’s not true for everyone)? Have they ever considered switching religions? What was their reason not to do so?
Seconded.
Well, to be honest it did come across that way to me, although that’s partly because I was framing it in the context of your last physics post, which was advocating against physics. I assumed this was sort of a continuation of that line of thought. Reading it again though, you’re right, this one is maybe not as anti-physics as I first thought. The earnings section does seem to emphasize the negative more than is necessary though, I think.
Well, to be fair, you’re not exactly painting the bleakest picture here. I mean, physics is ninth on that list of mid-career earnings out of 129 majors, and is pretty much indistinguishable from computer science. An extra $10,000/year or whatever on top of an already pretty good salary doesn’t hold much appeal to me—certainly not enough to make me wish I had done engineering. Having said that though, you’re right—if you do physics, you probably won’t get a job in physics, and you’ll probably make less than you would have had you done engineering. This is valuable information and smart high school students should definitely know it.
I wonder, though, to what extent physics degrees are actually displacing engineering degrees. They surely are to some extent—if physics were eliminated as a major tomorrow, no doubt a good fraction of physicists would migrate over to engineering. And then, yes, they would be better off than before in terms of earnings potential. But plenty would go to other majors with even lower earning potential, like applied math ($96,200), math ($88,800), chemistry ($84,100), or philosophy ($78,300). So if you’re advocating for people not to go into physics, I would say you should be very clear about what alternatives you’re recommending. In many ways physics seems like a pretty good compromise, if you’re intellectually curious—you get to study an interesting subject for four years, and then make almost as much as an engineer.
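For concreteness, here’s the back-of-the-envelope comparison I have in mind, using the mid-career median figures quoted above. I’ll use a round placeholder for the physics salary, since I only said it sits within ~$10,000/year of engineering—treat it as illustrative:

```python
# Mid-career median salaries quoted above; PHYSICS_SALARY is a
# hypothetical placeholder, not a figure from the list.
PHYSICS_SALARY = 100_000

alternatives = {
    "applied math": 96_200,
    "math": 88_800,
    "chemistry": 84_100,
    "philosophy": 78_300,
}

for major, salary in alternatives.items():
    print(f"physics minus {major}: ${PHYSICS_SALARY - salary:,}/year")
```

The point being: relative to the majors people might actually switch into, physics gives up much less than the physics-vs-engineering gap suggests.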
Continuing the use of LW as my source for non-fiction recommendations...
Any suggestions on a decent popular-but-not-too-dumbed-down intro to Economics?
A number of SSC posts have gone viral on Reddit or elsewhere. I’m sure he’s picked up a fair number of readers from the greater internet. Also, for what it’s worth, I’ve turned two of my friends on to SSC who were never much interested in LW.
But I’ll second it being among my favourite websites.