An akrasia-fighting tool via Hacker News via Scientific American based on this paper. Read the Scientific American article for the short version. My super-short summary is that in self-talk, asking “will I?” rather than telling yourself “I will” can be more effective at achieving success in goal-directed behavior. Looks like a useful tool to me.
it’s basically saying that gravity and EM are both obeying some more general law
No, what’s happening is that under certain approximations the two are described by similar math. The trick is to know when the approximations break down and what the math actually translates to physically.
Does it suggest a way to unify gravity and EM?
No.
Keep in mind that for EM there are 2 charges while gravity has only 1. Also, like electric charges repel while like gravitic charges attract. This messes with your expectations about the sign of an interaction when you go from one to the other. That means your intuitive understanding of EM doesn’t map well to understanding gravity.
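To make the sign issue concrete (my own illustration, not from the discussion above), compare the two inverse-square force laws:

```latex
F_{\text{EM}} = k_e \frac{q_1 q_2}{r^2}, \qquad F_{\text{grav}} = -G \frac{m_1 m_2}{r^2}
```

With the convention that positive $F$ means repulsion: like charges give $q_1 q_2 > 0$, so the EM force is repulsive, while masses are always positive and the explicit minus sign makes gravity attractive. The formulas look nearly identical, but that sign flip is exactly the kind of thing that breaks naive analogies between the two.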
Hi all—been lurking since LW started and followed Overcoming Bias before that, too.
I plan on coming.
I’ll be there. I’ve got space for 3 more in my car. If anyone in the Pasadena/Glendale area would like a ride, let me know.
Is there any philosophy worth reading?
Yes. I agree with your criticisms—“philosophy” in academia seems to be essentially professional arguing, but there are plenty of well-reasoned and useful ideas that come of it, too. There is a lot of non-rational work out there (i.e. lots of valid arguments based on irrational premises) but since you’re asking the question in this forum I am assuming you’re looking for something of use/interest to a rationalist.
So my question is: What philosophical works and authors have you found especially valuable, for whatever reason?
I’ve developed quite a respect for Hilary Putnam and have read many of his books. Much of his work covers philosophy of the mind with a strong eye towards computational theories of the mind. Beyond just his insights, my respect also stems from his intellectual honesty. In the Introduction to “Representation and Reality” he takes a moment to note, “I am, thus, as I have done on more than one occasion, criticizing a view I myself earlier advanced.” In short, as a rationalist I find reading his work very worthwhile.
I also liked “Objectivity: The Obligations of Impersonal Reason” by Nicholas Rescher quite a lot, but that’s probably partly colored by having already come to similar conclusions going in.
PS—There was this thread over at Hacker News that just came up yesterday if you’re looking to cast a wider net.
Were you thinking of “Affirmative Action Isn’t About Uplift”?
http://www.overcomingbias.com/2009/07/affirmative-action-wasnt-about-uplift.html
I agree that the single comment view has more boilerplate up top, but otherwise I’d say it usually fits on screens without any trouble.
I was curious about your comment so I took a look at the screenshot. You say in the bug report that you’re using a “fairly small font” setting but the font is being rendered much larger for you than I see using default IE9 and FF4 settings. Plus your picture shows the page with a serif font while the CSS specifies sans-serif. I’m not sure if it’s a browser issue or if you’re using custom settings, but in a 1600x900 view (as your screenshot size is), I can see the full comment without scrolling.
Mostly I’d like to know if other people “take 2 screens to see one permalinked comment” because I agree that reasonably short comments should be visible without scrolling.
Well, I suppose you could launch them out of our future light cone.
I hope that was a joke because that doesn’t square with our current understanding of how physics works...
I don’t think that’s a good example. For the status-quo bias to be at work we need to have the case that we think it’s worse for people to have both less personal responsibility and more personal responsibility (i.e., the status-quo is a local optimum). I’m not sure anyone would argue that having more personal responsibility is bad, so the status-quo bias wouldn’t be in play and the preference reversal test wouldn’t apply. (A similar argument works for the current rate of heroin addiction not being a local optimum.)
I think the problem in the example is that it mixes the axes for our preferences for people to have personal responsibility and our preferences for people not to be addicted to heroin. So we have a space with at least these two dimensions. But I’ll claim that personal responsibility and heroin use are not orthogonal.
I think the real argument is in the coupling between personal responsibility and heroin addiction. Should we have more coupling or less coupling? The drug in this example would make for less coupling. So let’s do a preference reversal test: if we had a drug that made your chances of heroin addiction more coupled to your personal responsibility, would you take it? I think that would be a valid preference reversal test in this case if you think the current coupling is a local optimum.
First off, let me say thank you for all the work that’s gone into the site update by everyone involved! The three changes I like most are the new header design (especially the clear separation between Main and Discussion—the old menu was too cluttered), the nearby meetup section, and the expanding karma bubbles.
I had one question about how the nearby meetup list and the Location setting work together. Is the meetup list supposed to sort by location somehow? If so, what do I need to put in my location? Thanks!
Sorry I can’t make it this time—I’ve got travel plans this weekend. Hope to see everyone next time.
Count me in.
I’ll make a weak vote for the IHOP near UCI. It’s easy to get to, has free parking, and seemed to work reasonably well for the last meetup.
I’ll be there. I’ll be driving from Torrance and can give a ride to anyone who happens to be in that area or along the way.
I tried something different and added a link to this section. Any comments on how that works?
I got an amazing amount of use out of Order of Magnitude Physics. It can get you in the habit of estimating everything in terms of numbers. I’ve found that relentlessly calculating estimates greatly reduces the number of biased intuitive judgments I make. A good class will include a lot of interaction and out-loud thinking about the assumptions your estimates are based on. As an alternative (or in addition), a high-level engineering design course can provide many of the same experiences within the context of a particular domain. (Aerospace/architecture/transportation/economic systems can all provide good design problems for this type of thinking—oddly, I haven’t yet seen a computer science design problem example that works as well.)
Also, I’ll second recommendations for just about any psychology course. And anywhere you see a course cross-listed between psychology and economics you’ll have a good chance of learning about human bias.
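To show the flavor of the habit I mean, here’s a toy Fermi estimate in Python. Every number in it is my own rough assumption made up for illustration; the point is making each assumption explicit so it can be argued with.

```python
# Toy Fermi estimate: roughly how many piano tuners serve a city of
# 1 million people? All inputs are assumptions, stated explicitly.
population = 1_000_000
households = population / 2.5        # assume ~2.5 people per household
pianos = households * 0.05           # assume ~1 in 20 households has a piano
tunings_per_year = pianos * 1        # assume each piano is tuned ~once a year
tunings_per_tuner = 4 * 250          # assume ~4 tunings/day, ~250 workdays/year
tuners = tunings_per_year / tunings_per_tuner
print(round(tuners))                 # order-of-magnitude answer: ~20
```

The exact answer doesn’t matter; what matters is that each line is a falsifiable assumption you can check against intuition or data, which is exactly the thinking the course drills.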
The morals of FAI theory don’t mesh well at all with the morals of transhumanism.
It’s not clear to me that a “transhuman” AI would have the same properties as a “synthetic” AI. I’m assuming that a transhuman AI would be based on scanning in a human brain and then running a simulation of the brain, while a synthetic AI would be more declaratively algorithmic. In that scenario, proving that a self-modification would be an improvement would be much more difficult for a transhuman AI, so I would treat it differently. Because of that, I’d expect a transhuman AI to be orders of magnitude slower to adapt and thus less dangerous than a synthetic AI. For that reason, I think it is reasonable to treat the two classes differently.
It’s a social gathering for anyone interested in discussing anything relevant to the LW community. I personally have been part of discussing rationality in general, cryonics, existential risk, personal health, and cognitive bias (among other topics) at the 2 meetups I’ve been to. It’s a good excuse to meet some other folks and trade ideas, start projects, etc.
I don’t think we have an agenda organized for this one. But if you’re curious, take a look at the comments from the September SoCal meetup for an idea about what was discussed and what people thought was good/bad/interesting about it.
For those of you who are interested, some of us folks from the SoCal LW meetups have started working on a project that seems related to this topic.
We’re working on building a fault tree analysis of existential risks with a particular focus on producing a detailed analysis of uFAI. I have no idea if our work will at all resemble the decision procedure SIAI used to prioritize their uFAI research, but it should at least form a framework for the broader community to discuss the issue. Qualitatively, you could use the work to discuss the possible failure modes that would lead to a uFAI scenario, and quantitatively you could use the framework and your own supplied probabilities (or aggregated probabilities from the community, domain experts, etc.) to crunch the numbers and/or compare uFAI to other posited existential risks.
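To give a sense of the quantitative side, here’s a minimal sketch of how a fault tree crunches numbers. The tree structure and the probabilities below are illustrative placeholders I made up for this comment, not our actual analysis; basic events are treated as independent, which a real analysis would need to justify.

```python
# Minimal fault-tree gate arithmetic (independent basic events assumed).

def p_or(*ps):
    # OR gate: P(at least one event occurs) = 1 - P(none occur)
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*ps):
    # AND gate: P(all events occur)
    q = 1.0
    for p in ps:
        q *= p
    return q

# Hypothetical basic-event probabilities, supplied by the user
# (or aggregated from the community, domain experts, etc.):
p_agi_built = 0.5          # an AGI is built at all
p_goals_misaligned = 0.3   # its goals end up misaligned
p_containment_fails = 0.2  # oversight/containment fails

# Top event: this toy tree says a uFAI scenario requires all three.
p_ufai = p_and(p_agi_built, p_goals_misaligned, p_containment_fails)
print(f"{p_ufai:.3f}")
```

Swapping in different probabilities, or restructuring the gates, is exactly the kind of community discussion the framework is meant to support.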
At the moment, I’d like to find out generally what anyone else thinks of this project. If you have suggestions, resources or pointers to similar/overlapping work you want to share, that would be great, too.