Completely coincidental—just a word I liked the sound of 10 years ago. It does fit in here rather well though.
luminosity
Me too.
Except I don’t read the post as endorsing testing whether religion is good or bad for X, but rather as saying that if a simulation showed we’d be better off with religious beliefs, we’d be better off adopting them. There are a number of reasons why this seems like a bad idea:
First, your simulation can only predict whether religion is better for you under the circumstances that you simulate. For instance, suppose you run a simulation of the entire planet Earth, but nothing much beyond it. The simulation shows better outcomes for you with religious faith, so you modify yourself to have religious faith. Then an extinction-level asteroid comes hurtling towards Earth. Previously, you would have tried to work out the best strategy to divert its course. Now you sit and pray that it goes off course instead.
Second, let’s say you simulate yourself for twenty years under various religious beliefs, and under atheism, and one of the simulations leads to a better outcome. You alter yourself to adopt this faith. You’ve now poisoned your ability to conduct similar tests in future. Perhaps a certain religion is better for the first year, or five years, or five thousand. Perhaps beyond that time it no longer is. Because you have now altered your beliefs to rest upon faith rather than upon testing, you can no longer update yourself out of this religious state, because you will no longer test to see whether it is optimal.
Terry Pratchett has a nice metaphor for a good way of thinking in his Tiffany Aching books: second, third, etc. thoughts. Basically, the idea is that you shouldn’t just trust whatever your thoughts say; you have your second thoughts monitoring them, and then your third thoughts monitoring those. I’ve always found it extremely helpful to pay attention to what I’m thinking; many times I’ve noticed a thought slipping past that is very obviously wrong and picked myself up on it. A few times I’ve even agreed with the original thought upon further analysis, but it was analysis that the thought definitely needed.
I’m not sure the second-thoughts metaphor is the best way of explaining it to people, mind you. Perhaps something like: “Pay attention to what you’re thinking; you’ll be surprised how often you disagree with your own first thought.”
I feel that perhaps you haven’t considered the best way to maximise your chance of developing Friendly AI if you were Eliezer Yudkowsky; your perspective is very much focussed on how it looks from the outside. Consider for a moment that you are in a situation where you think you can make a huge positive impact upon the world, and have founded an organisation to help you act upon that.
Your first, and biggest, problem is getting paid. You could take time off to work on attaining a fortune through some other means, but this is not a certain bet, and it will waste years that you could be spending working on the problem instead. Your best bet is to find already-wealthy people who can be convinced that you can change the world, that it’s for the best, and that they should donate significant sums of money to you (unless you believe this is even less certain than making a fortune yourself). There are already a lot of people in the world with the requisite amount of money to spare, so I think seeking donations is the more rational path.
Now, you need to persuade people of the importance of a brilliant new idea that no one has really been considering before, and that to most people isn’t at all obvious. Is the better fund-seeking strategy to admit to people that you’re uncertain whether you’ll accomplish it, compounding that on top of their own doubts? Not really. Confidence is a very strong signal that will help persuade people that you’re worth taking seriously. Asking Eliezer to be more publicly doubtful probably puts him in an awkward situation. I’d be very surprised if he doesn’t have some doubts (maybe he even agrees with you), but to admit to them would lower investors’ confidence in him, which would further lower the chance of him actually being able to accomplish his goal.
Having confidence in himself is probably also important, incidentally. Talking about doubts would tend to reinforce them, and when you’re embarking upon a large and important undertaking, you want to spend as much of your mental effort and time as possible on increasing the chances that you’ll bring the project about, rather than dwelling on your doubts and wasting mental energy on motivating yourself to keep working.
So how do you mitigate the problem that you might be wrong without running into these problems? Well, he seems to have done fairly well here. The SIAI has now grown beyond just him, giving him further perspectives to draw upon in his work and mitigating any shortcomings in his own analyses. He has laid down a large body of work explaining the mental processes his approaches are based on, which should be helpful both in recruitment for the SIAI and in letting people point out flaws or weaknesses in the work he is doing. It seems to me that so far he has laid the groundwork quite well, and now it just remains to see where he and the SIAI go from here. Importantly, the SIAI has grown to the point where, even if he is not considering his doubts strongly enough, even if he becomes a kook, there are others there who may be able to do the same work. And if not there, his reasoning has been laid out well enough that there is no reason others can’t follow their own take on what needs to be done.
That said, as an outsider it’s obviously wise to consider the possibility that the SIAI will never meet its goals. Luckily, it doesn’t have to be an either/or question. Too few people consider existential risk at all, but those of us who do can spread ourselves over the different risks that we see. To the degree that you think Eliezer and the SIAI are on the right track, you can donate a portion of your disposable income to them. To the extent that you think other types of existential risk prevention matter, you can donate a portion of that money to the Future of Humanity Institute or another relevant organisation fighting existential risk.
Like many others, I also think in internal discussions, whether with just one voice or several. The interesting thing to me is that while in my mind these ‘discussions’ feel complete, like regular conversation, when I come to verbalise them I quite often find there are huge and often unjustifiable gaps and leaps in the thinking. When discussing a problem with a colleague, I’ll often find either that the answer is obvious, or that I need to take more time to come up with a coherent way of explaining it.
I don’t do it often enough, but I try to safeguard against this when coming to conclusions by forcing myself to say them aloud.
A little nit-picky, but:
A friendly singularity would likely produce an AI that in one second could think all the thoughts that would take a billion scientists a billion years to contemplate.
Without a source these figures seem to imply a precision that you don’t back up. Are you really so confident that an AI of this level of intelligence will exist? I feel your point would be stronger by removing the implied precision. Perhaps:
A friendly singularity would likely produce a superintelligence capable of mastering nanotechnology.
I’ve long been a critic of experience-point/levelling systems in RPGs because of this. They optimise for playing like a sociopath: the guy who slaughters everything possible becomes the most powerful. I found Vampire: Bloodlines an interesting alternative, in that you were rewarded with skill points for finishing quests, and you’d get the same reward whether you slaughtered everyone, snuck through, or solved the problem any other way.
As for side quests, I guess the problem is that the developers spend an enormous amount of time generating them all and don’t want to see that time as essentially wasted, especially since a large number of people don’t do them anyway. Considering just how expensive a modern AAA game has become to create, it’s hard to imagine you could persuade RPG developers to punish players for undertaking side quests, even if the current approach does lead to the ridiculous situation where you’re supposedly racing against time to save the world/galaxy/universe, yet have time to help every kitten stuck in a tree along the way.
The last good example I can think of of a game that is immersive in this sense is Deus Ex. Depending on how the prequel goes, it might not be dead just yet.
Edit: As pointed out downthread, there are of course Bethesda’s RPGs too.
Interestingly, some of the best mathematical analysis I’ve ever seen happens in WoW, and to a limited extent in other MMOs. When you want to be in the top 25, 100, or even 1000 out of 13 million players, you need to squeeze out every advantage you can. Often the people testing game mechanics have a better understanding of them than the game designers. Similarly, the first people to defeat new bosses do so because they have a group of people they can depend upon, but also because they have several people capable of analysing boss abilities and iterating through different strategies until they find one that works.
It’s unfortunate that there’s so much sharing in the community; players who aren’t striving to be the first to finish a fight can just obtain strategies from other people, and people who don’t care to analyse gameplay changes or new items can rely upon those who do to tell them what to wear, what abilities to choose, and what order to use them in. Back when I played, one of my biggest frustrations was that nearly everybody in the game outside the top few thousand simply lacked the ability to react and strategise on the fly. Throw an unexpected situation at them and maybe 1 in 10 would cope with it.
If you’re not aware of Jane McGonigal, you might be interested in her work. Her basic position is that games are better than reality, mostly because they have a far superior feedback system. She tries to apply game design to the real world to stimulate people’s problem solving.
What particular biases are you worried about karma affecting? At first thought, I see more reasons why karma would be beneficial than not. For instance, someone who proposes many ideas that don’t work, and who won’t update on that evidence, would be expected to end up with low karma. Newcomers to the community can then see at a glance that following that person’s advice is substantially less likely to be valuable than following someone else’s. Indeed, following particularly poor advice could easily be harmful, so having a warning would be very important.
In regards to 1, while I think a sub-lesswrong would work all right, I do think you’d either want separate karma scores for the two sites, or a separate site based on the same architecture. I don’t think it’s too controversial to suggest that people can do well on Less Wrong without having great social skills; likewise, the advice of people who are socially accomplished might not carry over into great Less Wrong advice.
I don’t disagree much with your post (my only complaint is that fun is a reasonable goal in and of itself, and if someone chooses that, then so be it). However, my objection is to Blow’s (amongst many others’) characterisation of the game and its players. Contrary to his thesis, being smart and adept is actually massively rewarded in WoW by comparison with other games; nearly everybody who plays the game is aware of the best players. There is a lot of status up for grabs just by being the best on a server, let alone the best in the world.
Accepting his analysis at face value would lead you to conclude that there are no lessons you can take from WoW or other MMOs. In fact, to me WoW demonstrates ways in which people can be motivated to work on hard, mathematical problems. It would be a shame if people dismissed it out of hand, when it has the potential to demonstrate how to structure hard work to make it more palatable and attractive to tackle.
I suppose it would depend on the makeup of your particular server. Though we were nowhere near the world’s best, my guild had decent competition on our server, and there was always a need to strive to be the first to win an encounter. Both groups were reasonably well known on the server, and I would fairly often have people messaging me out of the blue.
To try to generalise the post a bit better: I think the lesson from this is that to encourage rational analysis and quick thinking in important areas, you need good competition, an easily verified criterion for ‘winning’, preferably milestones towards the ultimate goal, and a reward for winning, whether status or money. Off the top of my head, the people behind the X-Prizes seem to have used this model well to encourage innovation in select areas.
If you’re getting drowsy in lectures, wouldn’t you be better off either arriving better rested, or, if you already are and the presenter bores you, learning the information in another way? When I went to university, lecturers got a two-week trial to prove that their lectures were worth attending. If they weren’t, I just read the syllabus and studied the material from a textbook or the internet during the time allocated for the lecture.
It’s rather unfortunate that the majority of lectures ended up avoided this way, but it was better to use the allocated learning time optimally.
Details sound good to me. I’m generally open most times for meet ups if it would need to be moved. Budget is a little tight, so my only real problem with meet ups would be expensive venues.
Thanks for the offer. That sounds very affordable though, I’ll be fine.
I for one am at least interested in the concept. Whether individual posts would be worthwhile or not is another matter. May I suggest you use the open threads to provide a first cut at the topics you mean to address, refine it based on feedback, then post it and see how it goes? Remember that top-level posts need a certain karma level to appear on the front page, so if the community doesn’t like the post, you won’t be pushing other topics off the front page.
If the backlash against a top-level post would be great, you should be able to ascertain that from the open thread first.
Hi there,
My name is Lachlan, I’m 25 years old, and I too am a computer programmer. I found Less Wrong via Eliezer’s site, having been linked there by a comment on Charles Stross’s blog, if I recall correctly.
I’ve read through a lot of the LW backlog and generally find it all very interesting, but I haven’t yet taken the time and effort to apply the useful-seeming guidelines to my life and evaluate the results. I blame this on having left my job recently and feeling that I have enough change in my life right now. I worry that this excuse will metamorphose into another, though, and become a pattern of not examining my thinking as well as possible.
All that said, I do often catch myself thinking thoughts that on examination don’t hold up, and re-evaluating them. The best expression of this that I’ve seen is Pratchett’s first, second, third thoughts.