Taken the survey. Thanks for doing this, Yvain.
luminosity
Quora hack: Add ‘?share=1’ to the end of the url, and you can read everything.
I’ve always had problems dealing with negative emotions, in that once I am in the negative emotion it changes my decision making such that it is hard to break out of it. For example, I get angry, and even though I know I will feel stupid about it later on, it feels so good to be angry that I stay angry. And then I feel stupid about it.
This last week, I got really annoyed at something trivial enough that I no longer even remember what it was. For the first time ever, I just asked myself what the point of getting annoyed was, why I would want to inconvenience myself with annoyance, and what I could do to ensure this annoyance wouldn’t occur again. Almost immediately, my annoyance went away and I felt good again.
For what it’s worth, I’ve always perceived you as upbeat and positive from those comments of yours I’ve read on this site.
Fantastic post. Usually with posts along the lines of AI & epistemology I just quickly scan them as I expect them to go over my head, or descend straight into jargon, but this was extremely well explained, and a joy to follow.
If you want to cut down on an activity without eliminating it completely, consider delegating the decision to a random mechanism.
As a concrete example, I’ve tried for the last year or so to cut down on the amount of sugared snacks I eat, which used to happen essentially daily. I tried only doing it on set days of the week, but on other days it felt unfair that Monday!Lachlan got to eat a custard tart and Tuesday!Lachlan had to miss out. I tried giving myself a budget for the week, but I tended to blow through it early, feel guilty, and then usually break it later in the week anyway. I switched recently to using a dice app on my smartphone. If I have the urge to eat a sugared snack, I roll a 4-sided die. If it comes up 4, I can eat it. This has been working successfully for me for 3 weeks now, with no sign of it breaking.
I don’t feel guilty when the die does come up 4, since it was the dice that let me eat something, and I know that over the long term it will average out to the amount of eating I want. If I miss out, I don’t have myself to blame or get annoyed at; it’s just the vagaries of an RNG.
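For what it’s worth, the whole mechanism fits in a couple of lines of code. This is just an illustrative sketch (the function name and default are mine, not from any particular dice app), assuming a fair die where only the top face grants permission:

```python
import random

def may_indulge(sides: int = 4) -> bool:
    """Roll a fair n-sided die; permission is granted only on the top face.

    With a 4-sided die this allows the indulgence about 1 time in 4 over
    the long run, with no budget to track, blow through, or feel bad about.
    """
    return random.randint(1, sides) == sides
```

Making it stricter is just a matter of raising `sides`; the long-run rate is always 1/`sides`, and no single refusal is anyone’s fault.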
Please consider changing the font to the site default. As trivial as it might seem, I was much more reluctant to read this than the typical post due entirely to the font.
Don’t forget Australia. We have a few, large cities separated by long distances. In particular, Melbourne to Sydney is one of the highest traffic air routes in the world, roughly the same distance as the proposed Hyperloop, and there has been on and off talk of high speed rail links. Additionally, Sydney airport has a curfew, and is more or less operating at capacity. Offloading Melbourne-bound passengers to a cheaper, faster option would free up more flights for other destinations.
I feel that perhaps you haven’t considered the best way to maximise your chance of developing Friendly AI if you were Eliezer Yudkowsky; your perspective is very much focussed on how you see it looking in from the outside. Consider for a moment that you are in a situation where you think you can make a huge positive impact upon the world, and have founded an organisation to help you act upon that.
Your first, and biggest, problem is getting paid. You could take time off to work on attaining a fortune through some other means, but that is not a certain bet, and it would waste years you could instead spend working on the problem. Your best bet is to find already wealthy people who can be convinced that you can change the world, that it’s for the best, and that they should donate significant sums of money to you, unless you believe this is even less certain than making a fortune yourself. There are already a lot of people in the world with the requisite amount of money to spare. I think seeking donations is the more rational path.
Now, you need to persuade people of the importance of your brilliant new idea, one which no one has really considered before, and which to most people isn’t at all obvious. Is the better fund-seeking strategy to admit to people that you’re uncertain you’ll accomplish it, compounding that on top of their own doubts? Not really. Confidence is a very strong signal that will help persuade people you’re worth taking seriously. Asking Eliezer to be more publicly doubtful probably puts him in an awkward situation. I’d be very surprised if he doesn’t have some doubts, maybe he even agrees with you, but to admit to these doubts would be to lower the confidence of investors in him, which would then further lower the chance of him actually being able to accomplish his goal.
Having confidence in himself is probably also important, incidentally. Talking about doubts would tend to reinforce them, and when you’re embarking upon a large and important undertaking, you want to spend as much of your mental effort and time as possible on increasing the chances that you’ll bring the project about, rather than dwelling on your doubts and wasting mental energy on motivating yourself to keep working.
So how to mitigate the problem that you might be wrong without running into these problems? Well, he seems to have done fairly well here. The SIAI has now grown beyond just him, giving further perspectives he can draw upon in his work to mitigate any shortcomings in his own analyses. He’s laid down a large body of work explaining the mental processes he is basing his approaches on, which should be helpful both in recruitment for SIAI, and in letting people point out flaws or weaknesses in the work he is doing. It seems to me he has laid the groundwork out quite well so far, and now it just remains to see where he and the SIAI go from here. Importantly, the SIAI has grown to the point where even if he is not considering his doubts strongly enough, even if he becomes a kook, there are others there who may be able to do the same work. And if not there, his reasoning has been fairly well laid out, and there is no reason others can’t follow their own take on what needs to be done.
That said, as an outsider obviously it’s wise to consider the possibility that SIAI will never meet its goals. Luckily, it doesn’t have to be an either/or question. Too few people consider existential risk at all, but those of us who do consider it can spread ourselves over the different risks that we see. To the degree which you think Eliezer and the SIAI are on the right track, you can donate a portion of your disposable income to them. To the extent that you think other types of existential risk prevention matter, you can donate a portion of that money to the Future of Humanity Institute, or other relevant existential risk fighting organisation.
Anecdotal, but my experience of pair programming is that it’s incredibly useful for picking up bugs as they are laid down rather than having to dig them up later. Not to say that being monitored working doesn’t help, but finding and removing bugs is by far the hardest and most expensive part of programming.
I love the list of predictions, but I also feel fairly confident in predicting that this post won’t prompt me to actually make more (or more useful) predictions. Do you have any tips on building the habit of making predictions?
Always put headphones on when focussing on some work (in a team environment). Even if you don’t play any music to block out distractions, having them on signals that you’re busy and makes people less likely to interrupt you. You’ll find that the interruptions that remain are a higher quality of interruption, where your help is actually needed rather than just being slightly easier than figuring it out for themselves.
What does a politician who advances to the point of being able to make decisive choices most favour? Getting their policies across? Yes, but I would argue that by the time a politician makes it to a place where they can enact their policies, they might have one or two they’re willing to risk losing that power for, if you’re lucky! Generally, enacting good decisions comes second to ensuring you’ll get back in next time.
If a politician highly values re-election, and retaining power, then voting on the basis of intelligence, rationality, etc is unlikely to result in the best decisions being made policy wise. Instead it is likely to result in the best decisions being made to increase that politician’s popularity, or otherwise result in the greatest chance of re-election. In this case, intelligence and rationality can easily become tools that actually distort that politician from making good policy decisions, because he or she can easily see the outcome of them.
For example, take a look at drug policy. To the best of my knowledge, all the evidence indicates that illegalising substances does little to reduce their use, and instead wastes taxpayer money, increases the power of organised crime, and increases harm to those who choose to use illegalised substances. None of these outcomes are desirable, but poor drug policy persists. Why? Because it would cost a lot of political capital to address, results would take time to filter through, and your political rivals could hit you hard with populist nonsense in the meantime, which would weaken your position electorally. Any intelligent person who values their own re-election is probably not going to risk implementing such a policy, even if, all other considerations aside, they thought it was for the best.
Sydney Rationality Dojo—July
I seem to have run into my karma limit for downvoting. I really don’t like the way this system works. I upvote more than I downvote, so I don’t think I’m being exceedingly negative. If I want to apply a fairly reasonable level of quality control, the system quickly stops me from doing so, leaving me to either stop that quality control, or to apply a less discerning standard to my own comment posting in order to gain more karma. It seems a rather perverse incentive.
I’ve long been a critic of experience point / levelling systems in RPGs because of this. They optimise for wanting to be a sociopath. The guy who slaughters everything possible becomes the most powerful. I found Vampire: Bloodlines an interesting alternative, in that you were rewarded skill points for finishing quests, and you’d get the same reward whether you slaughtered everyone, snuck through, or any other way of solving the problem.
As for side quests, I guess the problem is that the developers spend an enormous amount of time generating them all, and don’t want to see that time as essentially wasted, especially since a large number of people don’t do them anyway. Considering just how expensive a modern AAA game has become to create, it’s hard to imagine you could persuade RPG developers to punish people for undertaking side quests, even if it does lead to the ridiculous situations where you’re supposedly racing against time to save the world/galaxy/universe, but have time to help every kitten stuck in a tree on the way.
I recently re-evaluated whether I should continue making the game I took a few years off work to develop a while ago, which is mostly finished except for artwork / animations. I was unsure as to whether it was worth continuing with or whether it was just sunk cost keeping me going. After ignoring the sunk costs, and re-running the calculations, I decided it was worth continuing, and have since got in contact with several artists to get work on it rolling again.
Many individual changes, several of them big and important in their own right (see my Meta thread comment), but the big one is just keeping track of things I want to do: portioning out ambitious goals to hit on particular days, managing to hit all or most of them, reviewing why I couldn’t hit them all when I didn’t, and doing a weekly review of which of the bigger goals I should attempt to tackle over the week ahead.
For example, I’ve hit the income threshold where it’s cheaper for me to have private health insurance than not, so that was put on my list of things to do after finishing moving to Sydney, pulled off once I was set up, turned into goals over a few days of finding out more information in general, researching specific plans, picking one, and signing up for it. Each of these was just one bullet point of several on my list to accomplish for that day.
I’ve gone from being somewhat bored at home, casting around for videogames that inspired me to actually play them, about 3 months ago, to being occupied up until I go to bed every night of the week working on longer term projects for myself.
Taken the survey (would have loved to do digit ratio, but too difficult to get access to the equipment needed).