Retrospectively, I essentially always regret accepting Chesterton’s Fence-type arguments. Overall I think the meme has been quite harmful to me. At the very least it has cost me a lot of time.
I loudly promote a large number of rather contentious ideas. In particular, I am an animal rights hardliner (an active member of Direct Action Everywhere) and a socialist, on top of the big rationalist stereotypes (the singularity is near, poly, etc.). I certainly annoy a lot of people, but socially I am doing well. I have many friends, an amazing long-term relationship, and am doing well financially. You can read my blog to see the sort of beliefs I promote.
It is unclear why this works out for me. I look rather average, which might help. Plausibly I have some sort of social skills that help me smooth things over if they get too hot; I handle conflict fairly well. It seems empirically true that many people are socially successful despite holding extremely controversial views. In some cases the views even seem to help them.
Most people, including most lesswrong readers, are not top AI experts, nor will they be able to become one quickly.
I wound up doing something similar to this:
ARKQ − 27%
BOTZ − 9%
Microsoft − 9%
Amazon − 9%
Alphabet − 8% (ARKQ is ~4% Alphabet)
Facebook − 7%
Tencent − 6%
Baidu − 6%
Apple − 5%
IBM − 4%
Tesla − 0% (ARKQ is 10% Tesla)
Nvidia − 2% (both BOTZ and ARKQ hold Nvidia)
Intel − 3%
Salesforce − 2%
Twilio − 1.5%
Alteryx − 1.5%
BOTZ and ARKQ are ETFs with pretty high expense ratios. You can replicate them yourself if you want to save 68–75 basis points. BOTZ is pretty easy to replicate with only ~$10K.
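Here is a minimal sketch of the fee arithmetic. The 0.68% and 0.75% expense ratios are assumptions chosen to match the 68–75 basis point figure above, not verified current numbers; check the prospectuses before relying on them.

```python
# Minimal sketch of what self-replication saves in fund fees.
# The expense ratios are assumptions matching the "68-75 basis
# points" figure above, not verified current numbers.

def annual_fee(dollars: float, expense_ratio: float) -> float:
    """Yearly cost of holding `dollars` in a fund charging `expense_ratio`."""
    return dollars * expense_ratio

position = 10_000  # the ~$10K position mentioned above

for ticker, ratio in [("BOTZ", 0.0068), ("ARKQ", 0.0075)]:
    saved = annual_fee(position, ratio)
    print(f"Replicating ${position:,} of {ticker} saves ~${saved:.0f}/year")

# Output:
# Replicating $10,000 of BOTZ saves ~$68/year
# Replicating $10,000 of ARKQ saves ~$75/year
```

With only around ten meaningful holdings, ~$10K is also enough to buy each position in reasonable size, which is why BOTZ in particular is easy to replicate at that scale.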
I would have a lot of trust in a vote. I seriously doubt we as a community would agree on a set of ‘knowers’ I would trust. Also, some similar ideas have been tried and have gone horribly wrong in at least some cases (e.g. the alumni dispute resolution council system). It is much harder for bad actors to subvert a vote than to subvert a small number of people.
It is commonly claimed that if you make it to ‘level N’ in your mathematics education you will only remember level N-1 or level N-2 long-term. Obviously there is no canonical way to split knowledge into levels, but one could imagine a chain like:
2) Calculus (think AP calc in the USA)
3) Linear Algebra and Multi-variable Calculus
4) Basic ‘Analysis’ (roughly proofs of things in Calculus)
5) Measure Theory
6) Advanced Analysis topic X (e.g. Evans’s Partial Differential Equations)
This theory roughly fits my experience.
I don’t think Qiaochu’s comment is particularly low effort. He has been in Berkeley for a long time and spoke about his experiences. Given that he shared his Google doc with some people, the comment was probably constructive on net, though I don’t think it was constructive to the conversation on lesswrong.
If someone posts a detailed thread describing how they want to do X, maybe people should hold off on posting ‘actually trying to do X is a bad idea’. Sometimes the negative comments are right. But lesswrong seems to have gone way too far in the direction of naysaying. As you point out, the top comments are often negative on even high effort posts by highly regarded community members. This is a big problem.
I would post much more on lesswrong if there was a ‘no nitpicking’ norm available.
Lying about what? It is certainly common to blatantly lie when you want to cancel plans or decline an invitation. Some people think there should be social repercussions for these lies. But imo these sorts of lies are, by default, socially acceptable.
There are complicated incentives around punishing deliberate manipulation and deception much harder than motivated/unconscious manipulation and deception. In particular, you are punishing people for being self-aware. You can interpret ‘The Elephant in the Brain’ as a record of the myriad ways people engage in somewhat, or more than somewhat, manipulative behavior. Motivated reasoning is endemic. A huge amount of behavior is largely motivated by local ‘monkey politics’ and status games. Learning about rationality might make a sufficiently open-minded and intellectually honest person aware of what they are often doing. But it’s not going to make them stop doing these things.
Imagine that people on average engage in 120 units of deception: 20 units of conscious deception and 100 units of unconscious. People who take the self-awareness pill engage in 40 units of conscious deception and 0 units of unconscious deception. The latter group engages in much less deception overall, but twice as much ‘deliberate’ deception.
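To put those hypothetical numbers side by side (a toy calculation; the ‘units’ are purely illustrative):

```python
# Toy numbers from the paragraph above; "units of deception"
# are illustrative, not measured quantities.
baseline   = {"conscious": 20, "unconscious": 100}
self_aware = {"conscious": 40, "unconscious": 0}

for label, person in [("baseline", baseline), ("self-aware", self_aware)]:
    total = person["conscious"] + person["unconscious"]
    print(f"{label}: total={total}, deliberate={person['conscious']}")

# baseline: total=120, deliberate=20
# self-aware: total=40, deliberate=40
# A norm that punishes only deliberate deception comes down harder
# on the self-aware person, who deceives three times less overall.
```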
I have two main conclusions. First, I think seeing people, and yourself, clearly requires an increased tolerance for certain kinds of bad behavior. People are not very honest, but cooperation is empirically possible. Ray commented below: “If someone consciously lies* to me, it’s generally because there is no part of them that thinks it was important enough to cooperate with me”. I think that Ray’s comment is false. Second, I think it’s bad to penalize ‘deliberate’ bad behavior so much more heavily. What is the point of penalizing deception? Presumably much of the point is to preserve the group’s ability to reason. Motivated reasoning and other forms of non-deliberate deception and manipulation are arguably at least as serious a problem as blatant lies.
Even if Glenn is having a mental breakdown, letting him continue to spam people on various forums is not helping him, in particular because he is currently burning a ton of social capital and cultivating a very negative reputation. At the very least he needs to take a break from public posting.
I think you should explain in substantially more detail why you think communities should do the opposite of following Eliezer’s advice.
If you bid $2 you get at most $4. If you bid $100 you have a decent chance to get much more. If even 10% of people bid ~$100 and everyone else bids $2, you are better off bidding $100. Even with a 5% $100 / 95% $2 split, the two strategies have a similar expected value. For bidding $2 to be a good strategy, you have to assume almost everyone else will bid $2.
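For concreteness, here is a minimal expected-value sketch. The payoff numbers are placeholders, not the actual game’s rules (which the post being replied to defines): assume a $2 bid returns a guaranteed $4, and a $100 bid returns some larger payout with probability p.

```python
# Hedged sketch with placeholder payoffs, not the actual game's
# rules; it only illustrates the break-even structure.

def ev_low_bid() -> float:
    # A $2 bid returns at most $4; treat that as guaranteed here.
    return 4.0

def ev_high_bid(p_success: float, payout: float) -> float:
    # Assume a $100 bid pays `payout` with probability `p_success`.
    return p_success * payout

# The high bid wins whenever p_success > ev_low_bid() / payout.
for payout in (80.0, 200.0):
    p_star = ev_low_bid() / payout
    print(f"payout=${payout:.0f}: bid $100 when P(success) > {p_star:.0%}")

# payout=$80: bid $100 when P(success) > 5%
# payout=$200: bid $100 when P(success) > 2%
```

The larger the potential payout, the smaller the fraction of high bidders needed before the $100 bid dominates.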
If you can consistently get to work late enough, I think the best time to go to sleep is around 1am. 1am is late enough that you can be out until midnight and still have an hour to get home and go to sleep on time. Even if you are out very late and only get to bed by 2am, you are only down an hour of sleep if you maintain your wake-up time. There is occasional social pressure to hang out substantially past midnight, but it is pretty rare.
For these reasons I go to bed at 1am and get up at 9am. Of course, I don’t have to be at work until 10am. But if you can make this work, it’s great to have a sleep schedule you can hold to without sacrificing socializing.
Ben Hoffman’s views on privacy are downstream of a very extreme world model. On http://benjaminrosshoffman.com/blackmailers-are-privateers-in-the-war-on-hypocrisy, a person comments under the name ‘declaration of war’ and Ben says:
I was a little surprised to see someone else express opinions so similar to my true feelings here (which are stronger than my endorsed opinions), but they’re not me.
Here are two relevant quotes:
It’s not surprising if privacy has value for the person preserving it. It’s very surprising if it has social value. Trivially, information puts people in better positions to make decisions. If it doesn’t, it logically has to be due to their perverse behaviors. It seems self-evident that we are all MASSIVELY worse off because sexuality is somewhat shrouded in secrecy. If we don’t agree on that point, not regarding what happens on the margins, but regarding global policy, I simply consider you to be part of rape culture and possibly it would be immoral to blackmail you rather than simply exposing you unconditionally.
Another, in the context of sexuality and privacy:
Coordinated concealing information is always about perpetuating patterns of abuse.
Ben says his endorsed views are not this extreme, but he certainly seems to have some extreme views about whether sharing more information is almost always good. His position on this is presumably downstream of how ‘perverse’ he thinks human society is. I personally think it is pretty obvious that, in currently existing society, sharing more information is not almost always good for society, and that privacy is not primarily a way to perpetuate abuse.
A society with no privacy is essentially a society of perfect norm and law enforcement. I do not think that would be a good society. Ben and others presumably agree that many current norms and laws are quite bad, but they also seem to think that in a world without privacy all norms and laws would become just. Perhaps the central crux is: ‘in a world without privacy, would laws and norms automatically become just?’
I find many of the views you updated away from plausible and perhaps compelling, and I have found your writing on other topics compelling. Given this, I feel like I should update my confidence in my own beliefs. Based on the post, I find it hard to model where you currently stand on some of these issues. For example, you claim you don’t endorse the following:
The future might be net negative, because humans so far have caused great suffering with their technological progress and there’s no reason to imagine that this will change.
I certainly don’t think it’s obvious that average suffering will be higher in the future, but it also seems plausible to me that the future will be net negative. ‘The trend line will continue’ seems like a strong enough argument to find a net-negative future plausible. Elsewhere in the article you claim that humans’ weak preferences will eventually end factory farming, and I agree with that. However, new forms of suffering may develop. One could imagine strong competitive pressures rewarding agents that ‘negatively reinforce’ agents they simulate. There are many other ways things can go wrong. So I am genuinely unsure what you mean when you say you no longer endorse this claim. Do you think it is implausible that the future is net negative? Or have you just substantially reduced the probability you assign to a net-negative future?
Relatedly, do you have any links on why you updated your opinion of professionalism? I should note I am not at all trying to nitpick this post; I am very interested in how my own views should update.
Language drift can introduce confusion, but it also has advantages. The original definition of a concept is unlikely to be the most useful one. It is good when words shift toward the definitions the community finds useful. Let me give an example.
Bostrom’s original definition of ‘infohazard’ includes information that is dangerous in the wrong hands: “a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” However, most people use ‘infohazard’ to mean something like “information that is intrinsically harmful to those who have it or that will cause them to do harm to others” (this is how it is used in the SCP stories, for example). As Taymon points out, Bostrom didn’t distinguish between “things you don’t want to know” and “things you don’t want other people to know”.
I think the SCP definition is more useful. It’s probably actively good that the definition of infohazard has shifted over time; insisting on Bostrom’s definition usually just confuses things.