This post rings true to me based on my personal experience. If you buy into the logic of the original post (and I’m betting that at least its author does), you should consider reevaluating your views toward other filtration systems. Plenty of institutions besides corporations engage in filtering, and those institutions are subject to the same reporting bias and comforting-lie bias that would apply here.
William_Quixote
@Yvain: To first order and generalizing from one data point, figure that Eliezer_2000 is demonstrably as smart and as knowledgeable as you can possibly get while still being stupid enough to try and charge full steam ahead into Unfriendly AI. Figure that Eliezer_2002 is as high as it gets before you spontaneously stop trying to build low-precision Friendly AI. Both of these are smart enough to be dangerous and not smart enough to be helpful… To put it briefly: There really is an upper bound on how smart you can be, and still be that stupid.
I think this line of argument should provide less comfort than it seems to. First, intelligent people can meaningfully have different values: not all intelligences value the same things, and not all human intelligences value the same things. Some people might be willing to take more risk with other people’s lives than you are. Example: oil company executives. There is strong reason to believe they are very intelligent and effective; they seem to achieve their goals in the world with a higher frequency than most other groups. Yet they also seem more likely to take actions that pose high risks to third parties.
Second, an intelligent, moral individual could be bound up in an institution which exerts pressure on them to act in a way that satisfies the institution’s values rather than their own. It is commonly said (although I don’t have a source, so a grain of salt is needed) that some members of the Manhattan Project were not certain that the reaction would not just continue indefinitely. It seems plausible that some of those physicists might have been above what has been described as the “upper bound on how smart you can be, and still be that stupid.”
Things you should aim to learn in classes:
-You should take enough math so that you can take set theory and formal logic and understand both of them
-You should take enough analysis (or calculus) so that you can think intuitively about continuity and about limits. You should also be able to think about when it makes sense to be thinking about the slope of a curve or the area under a curve. (This bullet probably goes without saying for a physics major, but is included for readers other than the OP.)
-You should take enough English (or other writing-intensive classes) to be a solid writer
-You should learn at least two programming languages (you probably only need to learn one in class; the second will be manageable on your own once you have learned the first)
-You should learn enough literary theory that you can casually and intuitively identify the social and artistic practices involved in creating and maintaining false categories, and similarly identify the practices involved in creating and maintaining a sense of “naturalness” around practices which could and should legitimately be questioned
-You should take game theory (imagine a big star drawing attention to this one)
-You should take macroeconomics with calculus and microeconomics with calculus. Some schools offer intro versions without calculus; for optimal time allocation, talk to whomever you need to (professor, advisor, dean, department chair, etc.) in order to skip these and go directly to the versions with calculus
-If a history professor has a good reputation for teaching, take at least one class about a time very different from your own. Realistically, any group of people more than 200 years back should seem crazy to you. A good red flag for quickly identifying poor history teachers is whether they ever use the word “we” to describe a group that includes both themselves and people who died before they were born.
-If your school has a good film class (ask students), take it. This isn’t so much practically useful, but if you substantially improve your eye for film you will be able to get a lot more enjoyment out of it for the remainder of your life.
As someone who is a new user, I strongly agree with Alicorn.
More options don’t always make people better off, but seeing downvoted posts is an option that is actively useful for new users. One of my first comments initially got downvoted to −1, and on seeing this, I looked at other downvoted comments and was able to use what I learned to edit my post so it eventually got voted back into positive territory.
Mistake avoidance is worth learning and downvoted posts are helpful for this. I have benefited from looking at downvoted posts, and I have no reason to believe I’m atypical in this regard.
I’d love to have a way to move comments. If anyone’s willing to donate enough money, this site could hire a full-time programmer and have all kinds of amazing new features. Meanwhile the development resources just don’t exist.
How much would part-time or one-off single-feature development work cost? If you are going to tell the public that a problem is easily solved with money, you should aim to give the public a sense of the problem’s scope.
On prior forums I have been on, attempts to split into an “only some posters” forum and an “all posters” forum have ended badly.
When there are enough high-class posters, everything goes into the high-class forum and the open forum collapses, leaving no worthwhile “in” for new users. When there are too few high-class users, everyone double-posts to both forums in order to get discussion, and you wind up with a functionally one-forum system, except with lots of links, more burden, and extra top-level menus.
I have not seen an open/closed forum system with exactly the Goldilocks number of high-class users needed to maintain stable equilibria in both forums.
Question on posting norms: What is the community standard for opening a discussion thread about an issue discussed in the sequences? Are there strong norms regarding minimum / maximum length? Is formalism required, or frowned on, or just optional? Thanks
This wording may lose a few people, but it probably helps many people as well. The core subject matter of rationality could very easily be dull or dry or “academic”. The tongue-in-cheek and occasionally outright goofy humor makes the sequences a lot more fun to read.
The tone may have costs, but not being funny has costs too. If you think back to college, more professors lost students by being boring than by being esoteric.
Rationality is hard to measure. If LW doesn’t make many people more successful in mundane pursuits but makes many people subscribe to the goal of FAI, that’s reason to suspect that LW is not really teaching rationality, but rather something else.
If prediction markets were legal, we could much more easily measure whether LW helped rationality. Just ask people to make n bets or predictions per month and see 1) if they did better than the population average and 2) if they improved over time.
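As a rough sketch of what that scoring might look like (the Brier score is one reasonable calibration metric; all the predictions and the baseline figure below are made-up values for illustration):

    # Score a user's predictions: each entry is (stated probability, outcome 0/1).
    # Lower Brier score = better calibrated. All data below is hypothetical.
    def brier_score(predictions):
        return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

    early_bets = [(0.9, 1), (0.7, 0), (0.6, 1)]   # hypothetical first month
    later_bets = [(0.8, 1), (0.3, 0), (0.7, 1)]   # hypothetical sixth month
    population_average = 0.25                     # assumed baseline score

    print(brier_score(early_bets) < population_average)       # 1) beat the average?
    print(brier_score(later_bets) < brier_score(early_bets))  # 2) improved over time?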
In fact, trying to get Intrade legalized in the US might be a very worthwhile project for just this reason (beyond all the general social reasons to like prediction markets).
If other people have suggested this before, there may be enough background support to make it worth following up on this idea.
When I get home from work, I will post in the discussion forum to see if people would be interested in working to legalize prediction markets (like Intrade) in the US.
[EDITED: Shortly after making this post, I saw Gwern’s post above suggesting that an alternative like PredictionBook would be just as good. As a result I did not make a post about legalizing prediction markets and instead tried PredictionBook for a month and a half. After this trial, I still think that making a push to legalize prediction markets would be worthwhile.]
I wonder how “playing devil’s advocate” fits into the epistemic hygiene / good cognitive citizenship worldview.
On the one hand, it can reduce groupthink and broaden the range of areas considered. On the other hand, it’s called devil’s advocate because you are advocating what are presumably bad ideas. If they are advocated too well, or you are not ‘flagged’ as operating in the devil’s advocate role, you might actually be spreading bad ideas.
I was thinking about this subject because I tend to slip into the devil’s advocate role in IRL conversations and was pondering whether the fact that I spend a lot of time advocating ideas I don’t support might be epistemically harmful (or at least a low-value use of time).
Edit: I distinguish this role in casual conversation from a more formal red-team approach (which would be known to all team members, and so not at risk of the motivation behind the advocacy being mistaken).
Thanks for the information. Though seeing how formal the original “devil’s advocate” role was again makes me worry about the wisdom of doing the same informally. Searching for patterns, it seems like the lauded examples of this are all formal and well flagged.
Other voices would have other messages: maybe that the patient was a horrible person who deserved to die, or that the patient must complete some bizarre ritual or else doom everybody. There were relatively fewer voices saying “Hey, let’s go fishing!”
There may be a bit of a selection bias here. If the voices are saying “let’s go fishing” or “those shoes are so last season, you really ought to get a new pair” or even “better run disk defragmenter again just to be sure” the person probably does not wind up hospitalized.
There was an article a year or so back in the Times about people with mundane voices in their heads forming a support network, and it mentioned that this may be much more common than is often thought.
Voting as One-boxing
If Omega thinks you are the kind of person who one-boxes, you will find $1,000,000 in the one box. At this point, you could take two boxes and pick up a small additional reward, but if you are really the kind of person who one-boxes, you won’t do that. If you went for the minor utility pickup at the end, you would be a two-boxer and the million dollars wouldn’t have been there in the first place.
If parties think you are the kind of person who votes, they will care about your policy preferences. At this point, you could stay home and pick up a small additional reward, but if you are really the kind of person who votes, you won’t do that. If you went for the minor utility pickup at the end, you would be a non-voter and the parties wouldn’t care about your policy preferences in the first place.
I think that if you really buy into the one-box arguments presented elsewhere on this site, you should be voting. (Conditional on the assumption that you have significant policy preferences; if you don’t care either way, then there is nothing analogous to the million dollars.)
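For concreteness, a minimal sketch of the expected payoffs (the 99% predictor accuracy is an assumed figure for illustration, not part of the original problem):

    # Expected value of one-boxing vs two-boxing, assuming Omega predicts
    # your disposition with some fixed accuracy (assumed value below).
    accuracy = 0.99                      # assumed predictor accuracy
    big, small = 1_000_000, 1_000

    ev_one_box = accuracy * big                # million present iff predicted one-boxer
    ev_two_box = (1 - accuracy) * big + small  # million rarely present, plus sure $1,000

    print(ev_one_box, ev_two_box)   # roughly 990,000 vs 11,000

On these assumptions, one-boxing dominates unless the predictor is barely better than chance; the same structure carries over to the voting analogy.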
Forgive me for being new to the site, but I’ve seen this kind of writing
rirelbar jub’q ibgrq yrff guna sbegl bar ibgrq mreb, gur nirentr jbhyq or nyzbfg rknpgyl rvtug.
in several places. How is it translated back to readable English?
Thanks!
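(For reference: the encoding used on the site is ROT13, which shifts each letter 13 places and is its own inverse. A minimal Python sketch using the standard codecs module, applied to the first words of the quoted text:)

    # ROT13 is self-inverse: encoding and decoding are the same operation.
    import codecs

    encoded = "rirelbar jub'q ibgrq yrff guna sbegl"
    print(codecs.decode(encoded, "rot13"))  # everyone who'd voted less than forty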
This is a very powerful fact about corporations. By delegating different authorities and by hiring people with different personalities into different departments, a corporation can simultaneously be the kind of cooperative entity that cooperates in a one-shot prisoner’s dilemma and the kind of greedy entity that can credibly claim to reject anything less than an 80-20 split in its favor in an ultimatum game.
I tend to view game strategies that lead to the best stable equilibrium as moral injunctions (tit for tat, cooperate first). These are provable (under certain assumptions), so I lean towards saying they are “real”.
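A minimal sketch of tit for tat in an iterated prisoner’s dilemma (the payoff numbers are standard textbook values, chosen only for illustration):

    # Tit for tat: cooperate first, then mirror the opponent's previous move.
    # Payoff table gives (my payoff, their payoff); standard illustrative values.
    PAYOFFS = {
        ("C", "C"): (3, 3),  # mutual cooperation
        ("C", "D"): (0, 5),  # I am exploited
        ("D", "C"): (5, 0),  # I exploit
        ("D", "D"): (1, 1),  # mutual defection
    }

    def tit_for_tat(opponent_history):
        return "C" if not opponent_history else opponent_history[-1]

    def play(strategy, opponent_moves):
        history, total = [], 0
        for their_move in opponent_moves:
            my_move = strategy(history)
            total += PAYOFFS[(my_move, their_move)][0]
            history.append(their_move)
        return total

    print(play(tit_for_tat, ["C", "C", "C", "C"]))  # 12: cooperation is sustained
    print(play(tit_for_tat, ["D", "D", "D", "D"]))  # 3: punishes after one exploited round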
I tried this yesterday and found it to be helpful.
Edited