I would have a lot of trust in a vote. I seriously doubt we as a community would agree on a set of knowers I would trust. Also, some similar ideas have been tried and went horribly wrong in at least some cases (e.g., the alumni dispute resolution council system). It is much harder for bad actors to subvert a vote than to subvert a small number of people.
It is commonly claimed that if you make it to ‘level N’ in your mathematics education, you will only remember level N−1 or level N−2 long-term. Obviously there is no canonical way to split knowledge into levels. But one could imagine a chain like:
2) Calculus (think AP calc in the USA)
3) Linear Algebra and Multi-variable Calculus
4) Basic ‘Analysis’ (roughly proofs of things in Calculus)
5) Measure Theory
6) Advanced Analysis topic X (e.g., Evans’s Partial Differential Equations)
This theory roughly fits my experience.
I don’t think Qiaochu’s comment is particularly low effort. He has been in Berkeley for a long time and spoke about his experiences. Given that he shared his Google Doc with some people, the comment was probably constructive on net. Though I don’t think it was constructive to the conversation on lesswrong.
If someone posts a detailed thread describing how they want to do X, maybe people should hold off on posting ‘actually trying to do X is a bad idea’. Sometimes the negative comments are right. But lesswrong seems to have gone way too far in the direction of naysaying. As you point out, the top comments are often negative on even high effort posts by highly regarded community members. This is a big problem.
I would post much more on lesswrong if there was a ‘no nitpicking’ norm available.
Lying about what? It is certainly common to blatantly lie when you want to cancel plans or decline an invitation. Some people think there should be social repercussions for these lies. But imo these sorts of lies are, by default, socially acceptable.
There are complicated incentives around punishing deliberate manipulation and deception much harder than motivated/unconscious manipulation and deception. In particular, you are punishing people for being self-aware. You can interpret ‘The Elephant in the Brain’ as a record of the myriad ways people engage in somewhat, or more than somewhat, manipulative behavior. Motivated reasoning is endemic. A huge amount of behavior is largely motivated by local ‘monkey politics’ and status games. Learning about rationality might make a sufficiently open-minded and intellectually honest person aware of what they are often doing. But it’s not going to make them stop doing these things.
Imagine that people on average engage in 120 units of deception: 20 units of conscious deception and 100 units of unconscious. People who take the self-awareness pill engage in 40 units of conscious deception and 0 units of unconscious deception. The latter group engages in much less deception overall, but twice as much ‘deliberate’ deception.
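The arithmetic here can be spelled out in a short sketch. The unit counts below are the hypothetical figures from this example, not measurements of anything:

```python
# Hypothetical deception "units" from the example above.
baseline = {"conscious": 20, "unconscious": 100}   # average person
self_aware = {"conscious": 40, "unconscious": 0}   # took the self-awareness pill

def total(units):
    """Total deception, regardless of whether it is deliberate."""
    return units["conscious"] + units["unconscious"]

# The self-aware group deceives far less overall (40 vs 120 units)...
assert total(baseline) == 120
assert total(self_aware) == 40

# ...yet engages in twice as much *deliberate* deception (40 vs 20 units).
assert self_aware["conscious"] == 2 * baseline["conscious"]
```

A norm that only punishes conscious deception penalizes the second group despite its much lower total.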
I have two main conclusions. First, I think seeing people, and yourself, clearly requires an increased tolerance for certain kinds of bad behavior. People are not very honest, but cooperation is empirically possible. Ray commented below: “If someone consciously lies* to me, it’s generally because there is no part of them that thinks it was important enough to cooperate with me”. I think that Ray’s comment is false. Second, I think it’s bad to penalize ‘deliberate’ bad behavior so much more heavily. What is the point of penalizing deception? Presumably much of the point is to preserve the group’s ability to reason. Motivated reasoning and other forms of non-deliberate deception and manipulation are arguably at least as serious a problem as blatant lies.
Even if Glenn is having a mental breakdown, letting him continue to spam people on various forums is not helping him. In particular because he is currently burning a ton of social capital and cultivating a very negative reputation. At the least, he needs to take a break from public posting.
I think you should explain in substantially more detail why you think communities should do the opposite of following Eliezer’s advice.
If you bid $2 you get at most $4. If you bid $100 you have a decent chance to get much more. If even 10% of people bid ~$100 and everyone else bids $2, you are better off bidding $100. Even in a 5% $100 / 95% $2 split, the two strategies have a similar expected value. In order for bidding $2 to be a good strategy, you have to assume almost everyone else will bid $2.
If you can consistently get to work late enough, I think the best time to go to sleep is around 1am. 1am is late enough that you can be out until midnight and still have an hour to get home and go to sleep on time. Even if you are out very late and only get to bed by 2am, you are only down an hour of sleep if you maintain your wake-up time. There is occasional social pressure to hang out substantially past midnight, but it is pretty rare.
For these reasons I go to bed at 1am and get up at 9am. Of course, I don’t have to be at work until 10am. But if you can make this work, it’s great to have a sleep schedule you can hold to without sacrificing socialization.
Ben Hoffman’s views on privacy are downstream of a very extreme world model. On http://benjaminrosshoffman.com/blackmailers-are-privateers-in-the-war-on-hypocrisy a person comments under the name ‘declaration of war’ and Ben says:
I was a little surprised to see someone else express opinions so similar to my true feelings here (which are stronger than my endorsed opinions), but they’re not me.
Here are two relevant quotes:
It’s not surprising if privacy has value for the person preserving it. It’s very surprising if it has social value. Trivially, information puts people in better positions to make decisions. If it doesn’t, it logically has to be due to their perverse behaviors. It seems self-evident that we are all MASSIVELY worse off because sexuality is somewhat shrouded in secrecy. If we don’t agree on that point, not regarding what happens on the margins, but regarding global policy, I simply consider you to be part of rape culture and possibly it would be immoral to blackmail you rather than simply exposing you unconditionally.
Another quote (in the context of sexuality and privacy):
Coordinated concealing information is always about perpetuating patterns of abuse.
Ben says his endorsed views are not this extreme but he certainly seems to have some extreme views about whether sharing more information is almost always good. His position on this is presumably downstream of how ‘perverse’ he thinks human society is. I personally think that it is pretty obvious that, in currently existing society, sharing more information is not almost always good for society. And that privacy is not primarily a way to prevent abuse.
A society with no privacy is essentially a society of perfect norm and law enforcement. I do not think that would be a good society. Ben and others presumably agree many current norms and laws are quite bad. But they also seem to think that in a world without privacy all norms and laws would become just. Perhaps the central crux is ‘in a world without privacy would laws and norms automatically become just?’.
I find many of the views you updated away from plausible, and perhaps compelling. I have also found your writing on other topics compelling. Given this, I feel like I should update my confidence in my own beliefs. Based on the post, I find it hard to model where you currently stand on some of these issues. For example, you claim you don’t endorse the following:
The future might be net negative, because humans so far have caused great suffering with their technological progress and there’s no reason to imagine that this will change.
I certainly don’t think it’s obvious that average suffering will be higher in the future. But it also seems plausible to me that the future will be net negative. ‘The trendline will continue’ seems like a strong enough argument to find a net-negative future plausible. Elsewhere in the article you claim that humans’ weak preferences will eventually end factory farming, and I agree with that. However, new forms of suffering may develop. One could imagine strong competitive pressures rewarding agents that ‘negatively reinforce’ agents they simulate. There are many other ways things can go wrong. So I am genuinely unsure what you mean when you say you don’t endorse this claim anymore. Do you think it is implausible that the future is net negative? Or have you just substantially reduced the probability you assign to a net-negative future?
Relatedly, do you have any links on why you updated your opinion of professionalism? I should note I am not at all trying to nitpick this post. I am very interested in how my own views should update.
Language drift can introduce confusions but it also has advantages. The original definition of a concept is unlikely to be the most useful definition. It is good if words shift to the definitions that the community finds useful. Let me give an example.
Bostrom’s original definition of ‘infohazard’ includes information that is dangerous in the wrong hands: “a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” However, most people use ‘infohazard’ to mean something like “information that is intrinsically harmful to those who have it or will cause them to do harm to others” (this is how it is used in the SCP stories, for example). As Taymon points out, Bostrom didn’t distinguish between “things you don’t want to know” and “things you don’t want other people to know”.
I think the SCP definition is more useful. It’s probably actively good that the definition of infohazard has shifted over time. Insisting on Bostrom’s definition is usually just confusing things.
There are some standard answers to “Can you rank animals by how bad eating them is?”. Here is Brian Tomasik’s ranking. The article goes into considerable detail and has a useful results table: How Much Direct Suffering is Caused by Different Animal Foods. Various people have proposed alternative ways to count, for example suffering per gram of protein, but this is the standard starting point.
Only a tiny minority of events are relevant to me. So I prefer they are not included.
I would strongly prefer word count. Word count is implemented uniformly across sites and contexts. I also almost always take longer than the stated read time to actually read the post.
It’s not obvious to me why some things feel constraining and some do not. For example, you could say that ‘every country in the world has the death penalty for stepping in front of a moving bus’. Transhumanists probably do feel constrained by a lack of technological solutions. But the bus death penalty just does not bother me the way a human-made law does.
My current rent in NYC is already as high as I can justify. But I would rather pay $1680 for a room in a house with a housekeeper than $1400 in a house without one. I think $1400 is a reasonable price for a single room in many locations.
It’s not clear to me this attitude is always optimal, even if your only goal is to improve. The fundamental question is: ‘Is the information we get from finishing this match in X minutes greater than the information we would get by spending those X minutes playing a new match?’
If the endgame is relatively long and not particularly interesting, just concede. We aren’t going to learn much from actually playing it out, even if I am 2-5% to win.
Say we are practicing for a 1v1 Terraforming Mars competition. On generation three you get out AI Central and I don’t have a huge lead in other areas to compensate. I think it’s rational to concede here. Terraforming Mars takes a long time to play out. It’s not really clear exactly how you will beat me, but you will draw a ton of cards and kill me somehow. I doubt you need practice crushing someone with a normal draw when you have an active AI Central.
In a game with substantial luck, I think it matters what caliber of opponents you are expecting to play against. If you are anticipating playing against people substantially worse than you, it can make sense to practice winning from ‘objectively lost’ positions. If you are substantially stronger than your opponent, you actually can win. But if your expected opponents are capable and playing the game out will take a while, just concede.
Never mind the fact that it is psychologically unpleasant to play out almost certainly lost positions. So if you are playing for enjoyment, it is often rational to concede. Of course, during a literal tournament match, play it out until the end if you are playing to win. Though make sure you are not screwing yourself because of timer rules (for example, not conceding a game of MTG quickly enough can make it unlikely you can finish the best-of-three match).
Sidenote: I have also had quite a lot of success playing games. Though I don’t really play competitive games anymore.