Eliezer’s first post on Overcoming Bias was, as far as I know, The Martial Art of Rationality. I think that title works well to set the tone.
Slackson
I write up comments and delete them because I think they’re obvious or meaningless more often than I actually post.
Looking at a primary school curriculum could help with this.
Yes, it’s a Newcomb-like problem. Anything where one agent predicts another is. People predict other people, with varying degrees of success, in the real world. Ignoring that when looking at decision theories seems silly to me.
Foc.us is a commercially available tDCS system marketed to gamers, and at a price that is almost affordable, depending on the actual benefits of the device. Does anyone here have experience, expertise, or any other insight regarding this?
I suspect that the intent of the original quote is that they’ll assess us by our curiosity towards, and effectiveness in discovering, our origins. As Dawkins is a biologist, he is implying that evolution by natural selection is an important part of it, which of course is true. An astronomer or cosmologist might consider a theory on the origins of the universe itself to be more important, a biochemist might consider abiogenesis to be the key, and so on.
Personally, I can see where he’s coming from, though I can’t say I feel like I know enough about the evolution of intelligence to come up with a valid argument as to whether an alien species would consider this to be a good metric to evaluate us with. One could argue that interest in oneself is an important aspect of intelligence, and scientific enquiry important to the development of space travel, and so a species capable of travelling to us would have those qualities and look for them in the creatures they found.
This is my first time posting here, so I’m probably not quite up to the standards of the rest of you just yet. Sorry if I said something stupid.
Hi, LessWrong.
There isn’t too much to say about me. I’m a Kiwi 16 year old high school student who’s been interested in a lot of the topics discussed here for a long time. I stumbled across HPMoR a few months ago. After reading through that, I came here and now I’ve read through pretty much all of the sequences. I’m definitely getting better at decision making and evaluating information, but I don’t think I’m at the same level as most of you just yet.
I’m going to be busy for the next couple of months with exams, and then a trip to Ecuador, but hopefully when I get back I’ll be able to take part in the community properly. I have a bad habit of being unnecessarily shy, even online, with people I have respect for. I’m going to try to change that this time. It should be easier than it has been in the past, because I have a lot of questions to ask, and sometimes even ideas to add to the conversation.
Cheers.
This doesn’t always apply. It can, for example, leave you with an hour to kill at a train station, because you decided it would be really embarrassing to show up late for your ride to a CFAR workshop because of the planning fallacy.
Stuff I learned at the Melbourne CFAR workshop. The class was called offline habit training, i.e. actually performing your habit multiple times in a row in response to its trigger. Salient examples: practicing getting out of bed in response to your alarm, practicing walking in the door and putting your keys where they belong, practicing putting your hands in your lap when you’re about to bite your nails, and practicing straightening your neck when you notice you’re hunched. These are all examples I’ve implemented, with good results.
Adding associations is a key part, too. For these examples, I imagine the alarm as an air raid siren and my house getting bombed if I don’t get out of bed on time. I imagine Butch being shot by Vincent in an alternate version of Pulp Fiction where his father’s watch wasn’t on the little kangaroo and he had to hunt around for it. For biting my nails, I imagine Mia Wallace being stabbed in the heart. The connection here is that biting your nails can make you sick; the vividness and intensity make up for how tenuous that is. For posture, I imagine Gandalf the Grey compared to Gandalf the White (plus triumphant LotR music).
Since I made that comment, I’ve gotten about a third of the way through Moonwalking With Einstein, and practiced the Memory Palace/method of loci a couple of times. I’ve lived in a bunch of different houses, so that works pretty well for me. Some of the stuff that was mentioned sounds a lot like spaced repetition: “[...] if you revisit the journey through your memory palace later this evening, and again tomorrow afternoon, and perhaps again a week from now, this list will leave a truly lasting impression.”
This is another bit of evidence suggesting that spaced repetition would be powerful in combination with mnemonics. What Anki provides, which is far more important than the flashcard thing, is testing. I’ve been thinking about applying some of the ideas from test-driven development to self-programming, and Anki cards would be a core part of that.
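To make the combination concrete, here is a minimal, hypothetical sketch of the kind of scheduler that sits underneath spaced-repetition tools like Anki. The interval and ease numbers are illustrative assumptions loosely modeled on SM-2-style rules, not Anki’s actual algorithm: a successful test grows the review interval multiplicatively, while a failure resets it and makes the card harder.

```python
from dataclasses import dataclass

@dataclass
class Card:
    """A single flashcard with SM-2-style scheduling state (illustrative values)."""
    interval: float = 1.0   # days until the next review
    ease: float = 2.5       # multiplier applied to the interval after a success

def review(card: Card, recalled: bool) -> Card:
    """Update a card after a self-test: grow the interval on success,
    reset it (and lower the ease) on failure."""
    if recalled:
        card.interval *= card.ease
    else:
        card.interval = 1.0
        card.ease = max(1.3, card.ease - 0.2)  # floor keeps cards from collapsing
    return card
```

The testing aspect is the `recalled` flag: each review is a pass/fail check against your own memory, which is roughly what a test-driven approach to self-programming would need.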
Sorry, I realize most of that isn’t relevant, but I hope the parts that were are useful.
Once EA is a popular enough movement that this begins to become an issue, I expect communication and coordination will be a better answer than treating this like a one-shot problem. Maybe we’ll end up with meta-charities as the equivalent of index funds, that diversify altruism to worthy causes without saturating any given one. Maybe the equivalent of GiveWell.org at the time will include estimated funding gaps for their recommended charities, and track the progress, automatically sorting based on which has the largest funding gap and the greatest benefit.
I doubt it will ever make sense for individuals to personally choose, rank, and donate their own money to charities as if they were choosing the ratios for everyone TDT-style, not least because of the unnecessary redundancy.
EDIT: Upvoted because it is a valid concern. The AMF reached saturation relatively quickly, and may have exceeded the funding it needed. I just disagree with the efficiency of this particular solution to the problem.
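A hypothetical allocator along these lines might rank recommended charities by estimated benefit per dollar and fill each one’s remaining funding gap in turn, so that no single recommendation gets oversaturated. Everything below (names, numbers, the ranking rule) is made up for illustration:

```python
def allocate(donation: float, charities: list[tuple[str, float, float]]) -> dict[str, float]:
    """Split a donation across charities without exceeding any funding gap.
    charities: list of (name, remaining_funding_gap, estimated_benefit_per_dollar).
    Greedy rule: fund the highest-benefit charity first, up to its gap,
    then move down the ranking with whatever money remains."""
    ranked = sorted(charities, key=lambda c: c[2], reverse=True)
    grants: dict[str, float] = {}
    remaining = donation
    for name, gap, _benefit in ranked:
        grant = min(gap, remaining)
        if grant > 0:
            grants[name] = grant
            remaining -= grant
    return grants
```

For example, with a $60 gap at a high-benefit charity and a $200 gap at a lower-benefit one, a $100 donation would top up the first and send the remaining $40 to the second, rather than oversaturating the favourite.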
Didn’t the paper show TDT performing better than CDT in Parfit’s Hitchhiker?
Opportunity costs. We’d prefer it were spent on a Mars colony rather than on most things, but if we’re spending money on x-risk reduction, it might not be the most cost-effective way to help.
The obvious answer from this crowd is some kind of prediction market, with the “group charter” being turned into a measurable utility function with which to make the judgments about the success or failure of a policy. If people are restricted to using only money from an equal “allowance”, plus whatever they have earned from predictions, over time those who have made more accurate predictions gain the most influence on the outcomes of the decisions.
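As a rough sketch of the mechanism (a simple parimutuel settlement, not a full prediction market): participants stake part of their allowance on a binary prediction, losing stakes are pooled, and the pool is paid out to correct predictors in proportion to their stakes, so accurate predictors accumulate balance, and hence influence, over repeated policy questions. All names and numbers are illustrative.

```python
def settle(balances: dict[str, float],
           bets: dict[str, tuple[bool, float]],
           outcome: bool) -> dict[str, float]:
    """Settle one binary prediction parimutuel-style.
    bets: {participant: (predicted_outcome, stake)}.
    Losing stakes are pooled and distributed to winners
    in proportion to each winner's stake."""
    winners = {n: s for n, (p, s) in bets.items() if p == outcome}
    losers = {n: s for n, (p, s) in bets.items() if p != outcome}
    pot = sum(losers.values())
    total_winning = sum(winners.values())
    new = dict(balances)
    for name, stake in losers.items():
        new[name] -= stake
    if total_winning > 0:
        for name, stake in winners.items():
            new[name] += pot * stake / total_winning
    return new
```

Starting everyone at the same allowance and repeating this across many measurable policy questions is what would let the better predictors end up with more weight.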
This is the novel. Dr_Manhattan linked to an essay by Roger Williams, which discusses where his novel intersects with the sort of things organizations like SIAI are looking at.
I’ve started a blog, and I’m kind of unreasonably shy about it. Especially given that it’s, you know, a blog.
Not the right term for what’s happening. Deflationary spiral refers to low demand reducing prices, which reduces production, which reduces the employment rate/average wage, which reduces demand. The bitcoin economy is not large enough for this to be the case. Rather, it appears to be a speculative bubble, where people predict the price will go up, so more people buy it, and so the price goes up, etc. Then enough people at once go “this is as far as the train’s going” and everybody panics and tries to sell and the price crashes.
Since bitcoin is a currency experiencing deflation due to a cyclic process, “deflationary spiral” would sort of make sense if it didn’t already refer to another specific phenomenon.
Implicit-association tests are handy for identifying things you might not be willing to admit to yourself.
This is awesome. Thanks for doing all that work.
Can blackmail-type information usefully be compared to things like NashX or Mutually Assured Destruction?
Most of my friends have information on me that I wouldn’t want to get out, and vice versa. This means we can do favours for each other that pay off asynchronously, or trust each other with things that seem less valuable than that information. Building a friendship seems to be based on gradually accumulating this information on each other, without either side ever holding significantly more than the other.
I don’t think this is particularly original, but it seems a pretty elegant idea and might have some clues for blackmail resolution.
Nonsane would be better, I think. Whereas unsane suggests something strongly opposite to sane, nonsane suggests a mere lack of sanity. It also looks like it might be related to nonsense, which is a common product of nonsanity.