I think you should explain in substantially more detail why you think communities should do the opposite of following Eliezer’s advice.
If you bid $2 you get at most $4. If you bid $100 you have a decent chance to get much more. If even 10% of people bid ~$100 and everyone else bids $2, you are better off bidding $100. Even in a 5% $100 / 95% $2 split, the two strategies have a similar expected value. In order for bidding $2 to be a good strategy, you have to assume almost everyone else will bid $2.
If you can consistently get to work late enough, I think the best time to go to sleep is around 1am. 1am is late enough that you can be out until midnight and still have an hour to get home and go to sleep on time. Even if you are out very late and only get to bed by 2am, you are only down an hour of sleep if you maintain your wakeup time. There is occasional social pressure to hang out substantially past midnight, but it is pretty rare.
For these reasons I go to bed at 1am and get up at 9am. Of course, I don’t have to be at work until 10am. But if you can make this work, it’s great to have a sleep schedule you can hold to without sacrificing socialization.
Ben Hoffman’s views on privacy are downstream of a very extreme world model. On http://benjaminrosshoffman.com/blackmailers-are-privateers-in-the-war-on-hypocrisy a person comments under the name ‘declaration of war’ and Ben says:
I was a little surprised to see someone else express opinions so similar to my true feelings here (which are stronger than my endorsed opinions), but they’re not me.
Here are two relevant quotes:
It’s not surprising if privacy has value for the person preserving it. It’s very surprising if it has social value. Trivially, information puts people in better positions to make decisions. If it doesn’t, it logically has to be due to their perverse behaviors. It seems self-evident that we are all MASSIVELY worse off because sexuality is somewhat shrouded in secrecy. If we don’t agree on that point, not regarding what happens on the margins, but regarding global policy, I simply consider you to be part of rape culture and possibly it would be immoral to blackmail you rather than simply exposing you unconditionally.
Another (in the context of sexuality and privacy)
Coordinated concealing information is always about perpetuating patterns of abuse.
Ben says his endorsed views are not this extreme but he certainly seems to have some extreme views about whether sharing more information is almost always good. His position on this is presumably downstream of how ‘perverse’ he thinks human society is. I personally think that it is pretty obvious that, in currently existing society, sharing more information is not almost always good for society. And that privacy is not primarily a way to prevent abuse.
A society with no privacy is essentially a society of perfect norm and law enforcement. I do not think that would be a good society. Ben and others presumably agree many current norms and laws are quite bad. But they also seem to think that in a world without privacy all norms and laws would become just. Perhaps the central crux is ‘in a world without privacy would laws and norms automatically become just?’.
I find many of the views you updated away from plausible and perhaps compelling. Given that I have found your writing compelling on other topics, I feel like I should update my confidence in my own beliefs. Based on the post I find it hard to model where you currently stand on some of these issues. For example you claim you don’t endorse the following:

> The future might be net negative, because humans so far have caused great suffering with their technological progress and there’s no reason to imagine that this will change.
I certainly don’t think it’s obvious that average suffering will be higher in the future. But it also seems plausible to me that the future will be net negative. ‘The trend line will continue’ seems like a strong enough argument to find a net negative future plausible. Elsewhere in the article you claim that humans’ weak preferences will eventually end factory farming, and I agree with that. However, new forms of suffering may develop. One could imagine strong competitive pressures rewarding agents that ‘negatively reinforce’ agents they simulate. There are many other ways things can go wrong. So I am genuinely unsure what you mean when you say you don’t endorse this claim anymore. Do you think it is implausible that the future is net negative? Or have you just substantially reduced the probability you assign to a net negative future?
Relatedly, do you have any links on why you updated your opinion of professionalism? I should note I am not at all trying to nitpick this post. I am very interested in how my own views should update.
Language drift can introduce confusions but it also has advantages. The original definition of a concept is unlikely to be the most useful definition. It is good if words shift to the definitions that the community finds useful. Let me give an example.
Bostrom’s original definition of ‘infohazard’ includes information that is dangerous in the wrong hands: “a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” However, most people use ‘infohazard’ to mean something like “information that is intrinsically harmful to those who have it or will cause them to do harm to others” (this is how it is used in the SCP stories, for example). As Taymon points out, Bostrom didn’t distinguish between “things you don’t want to know” and “things you don’t want other people to know”.
I think the SCP definition is more useful. It’s probably actively good that the definition of infohazard has shifted over time. Insisting on Bostrom’s definition is usually just confusing things.
There are some standard answers to “Can you rank animals by how bad eating them is?”. Here is Brian Tomasik’s ranking. The article goes into considerable detail and has a useful results table: How Much Direct Suffering is Caused by Different Animal Foods. Various people have proposed alternative ways to count, for example suffering per gram of protein, but this is the standard starting point.
Only a tiny minority of events are relevant to me. So I prefer they are not included.
I would strongly prefer word count. Word count is implemented uniformly across sites and contexts. I also almost always take longer than the stated read time to actually read the post.
It’s not obvious to me why some things feel constraining and some do not. For example, you could say that ‘every country in the world has the death penalty for stepping in front of a moving bus’. Obviously transhumanists probably do feel constrained by a lack of technological solutions. But the bus death penalty just does not bother me the way a human-made law does.
My current rent in NYC is already as high as I can justify. But I would prefer to pay $1680 for a room in a house with a housekeeper than $1400 in a house without. I think $1400 is a reasonable price for a single room in many locations.
It’s not clear to me this attitude is always optimal even if your only goal is to improve. The fundamental question is ‘Is the information we get from finishing this match in X minutes greater than the information we would get by spending those X minutes on a new match?’.
If the endgame is relatively long and not particularly interesting, just concede. We aren’t going to learn much from actually playing it out, even if I am 2-5% to win.
Say we are practicing for a 1v1 Terraforming Mars competition. On generation three you get out AI Central and I don’t have a huge lead in other areas to compensate. I think it’s rational to concede here. Terraforming Mars takes a long time to play out. It’s not really clear how exactly you will beat me, but you will draw a ton of cards and kill me somehow. I doubt you need practice crushing someone with a normal draw when you have an active AI Central.
In a game with substantial luck, I think it matters what caliber of opponents you are expecting to play against. If you are anticipating playing against people substantially worse than you, it can make sense to practice winning from ‘objectively lost’ positions. If you are substantially stronger than the opponent, you actually can win. But if your expected opponents are capable and playing the game out will take a while, just concede.
Never mind the fact that it is psychologically unpleasant to play out almost certainly lost positions. So if you are playing for enjoyment, it’s often rational to concede. Of course, during a literal tournament match, play it out until the end if you are playing to win. Though make sure you are not screwing yourself because of timer rules (for example, not conceding a game of MtG quickly enough can make it unlikely you can finish the best-of-three match).
Sidenote: I have also had quite a lot of success playing games. Though I don’t really play competitive games anymore.
I think it’s very confusing to call d = 0.2 to 0.5 ‘small’, especially in the context of a four-day workshop. Imagine the variable is IQ. Then a ‘small’ effect increases IQ by 3 to 7.5 points. That boost in IQ would be much better described as ‘huge’. However, IQ has a relatively large standard deviation compared to its mean (roughly 15 and 100).
Let’s look at male height. In the USA, male height has a mean around 70 inches and a standard deviation around 4 inches. (Note 4⁄70 is 38% of 15⁄100.) A d of 0.2 to 0.5 would correspond to an increase in height of 0.8 to 2 inches. Some people are willing to undergo costly, time-consuming, and painful limb-lengthening surgery to gain 4-5 inches of height. If a four-day, $4000 workshop increased your height by 0.8 to 2 inches, millions of men would be on the waiting list. I know I would be. That doesn’t really sound ‘small’ to me.
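The arithmetic here is just converting a standardized effect size back into raw units (raw change = d × standard deviation). A quick sketch using the means and standard deviations quoted above:

```python
# Convert a standardized effect size d back into raw units: raw change = d * sd.
def effect_in_raw_units(d, sd):
    return d * sd

# IQ: sd ~15, so d = 0.2 to 0.5 corresponds to 3 to 7.5 IQ points.
print(effect_in_raw_units(0.2, 15), effect_in_raw_units(0.5, 15))

# US male height: sd ~4 inches, so d = 0.2 to 0.5 corresponds to 0.8 to 2 inches.
print(effect_in_raw_units(0.2, 4), effect_in_raw_units(0.5, 4))

# Relative spread: height's sd/mean (4/70) compared to IQ's (15/100) is ~38%.
print(round((4 / 70) / (15 / 100), 2))
```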
As an aside, I don’t understand why CEA or the community building fund won’t give any money to these projects.
The question is, who counts as terrible? What sorts of lapses in rigorous thinking are just normal human fallibility and which make a person seriously untrustworthy?
If at all possible you need to look at the person’s actual track record. Everyone has views you will find incredibly stupid or immoral. Even the very wise make mistakes that look obvious to us. In addition, it’s possible that the person engaging in ‘obvious folly’ actually has a better understanding of the situation than we do. You need to look at a representative sample and weigh their successes and failures in a systematic way. If you cannot access their history, you still need to get an actual sample. If you were judging programmers, something like a Triplebyte interview is a reasonable way to get information. Trying to weigh the stupid things they have said about programming is a very bad method. Without a real sample you are making a character judgment under huge uncertainty.
Of course, we are Bayesians. If forced to come up with an estimate despite uncertainty, we can do it. But it’s important to do the updating correctly. Say a person’s stupidest belief, that you know about, is X. The relevant odds ratio is not:
P(believes X | trustworthy) / P(believes X | untrustworthy)
Instead you have to look at:
P(stupidest belief I learn about is at least as stupid as X | trustworthy) / P(stupidest belief I learn about is at least as stupid as X | untrustworthy)
You can try to estimate similar odds ratios for collections of stupid beliefs. This method isn’t as good as trying to condition on both unusually wise and unusually stupid beliefs. But if you are going to judge based on stupid beliefs, you have to do it correctly. Keep in mind that the more ‘open’ a person is, the more likely you are to learn their stupid beliefs. So you need to factor in an estimate of their openness towards you.
Would be useful to me
I wonder if we are past the tipping point. If someone’s main social group is rationalists, I am not sure it makes sense not to live in the Bay. You will lose too many friends over time. And maintaining long-term social connections is very important. I think the unfortunate situation is that non-Bay communities have to be staffed by people who dislike Bay culture, don’t consider rationalists their primary social group, or have strong reasons for living in a particular city (for example, they work in finance and a lot of the jobs are in NYC). I think this situation is problematic, mostly for the reasons you outlined. But there isn’t going to be a coordinated effort to reverse the trend of people moving to the Bay. And there are certainly benefits of having people concentrated. I also agree that the Schelling point had to be the Bay; the Silicon Valley money was too important given the community’s goals and demographics.
Zvi posted the following comment:
Yeah, I likely should have been more explicit about the whole ‘the ones who are any good already got hired’ thing. Which has the same implication, of course – that if you can simply display what we’d instinctively think of as ordinary competence, you’ll get hired reasonably quickly once you start putting in effort. Which matches my experience on both sides.
To put it mildly, the above does not match my experience at all. And I know a ton of rationalist programmers having trouble finding a job. These people are usually not super experienced and didn’t go to the ICPC finals. But they certainly seem at least ‘ordinarily competent’.
Same experience. I was applying for software jobs, for what it’s worth.