Adam Zerner
I’ve had it disabled for a while now. Thanks for thinking of me though :)
I’m not sure if this addresses all of the things you’re saying. If not, let me know.
I’m not claiming that all or even most rationalists actually are successful in leaning closer to Real Rationality than Hollywood Rationality. I’m claiming that a very large majority 1) endorse and 2) aspire towards the former rather than the latter.
Incremental Progress and the Valley talks about the relationship between rationality and winning. In short, what the post says, and what I think the majority opinion amongst rationalists is, is that in the long run rationality does bring you closer to winning, but 1) a given step towards being more rational sometimes moves you a step back on winning rather than forward, and 2) we’re not really at the point in our art where it leads to a sizeable increase in winning.
As for convincing people about the threat of AI:
1) I don’t think the art of rationality has spent much time on persuasion, compared to, say, probability theory.
2) I think there’s been some amount of effort put towards persuasion (people reference Influence: The Psychology of Persuasion by Robert Cialdini a fair bit).
3) People very much care about anything even remotely relevant to lowering the chance of unfriendly AI or any other existential risk and will be extremely open to any ideas you or others have on how to do better.
4) There very well might be some low-hanging fruit in terms of getting better at persuading others in the context of AI risk.
5) Convincing people of the importance of AI risk is pretty difficult, and so lack of success very well might be more about difficulty than competence.
I am sensing some implicit if not explicit claims that rationalists believe in Hollywood Rationality instead of Actual Rationality. To be clear, that is untrue.
Ah, I see. That makes sense and changes my mind about what the psychiatrist probably meant. Thanks.
(Although it invites the new complaint of “I’m asking because I want confirmation, not moderate confidence, and you’re the professional who is supposed to provide that confirmation”, but that’s a separate thing.)
I’m curious about the downvotes here. Is this an implausible hypothesis?
I’ve gotta vent a little about communication norms.
My psychiatrist recommended a new drug. I went to take it last night. The pills are absolutely huge and make me gag. But I noticed that the pills look like they can be “unscrewed” and the powder comes out.
So I asked the following question (via chat in this app we use):
For the NAC, the pill is a little big and makes me gag. Is it possible to twist it open and pour the powder on my tongue? Or put it in water and drink it?
The psychiatrist responded:
Yes it seems it may be opened and mixed into food or something like applesauce
The main thing I object to is the language “it seems”. Instead, I think “I can confirm” would be more appropriate.
I think that it is, here and frequently elsewhere, a motte-and-bailey. The bailey being “yes, I confirm that you can do this” and the motte being “I didn’t say it’d definitely be ok, just that it seems like it’d be ok”.
Well, that’s not quite right. I think it’s more subtle than that. If consuming the powder led to issues, I do think the psychiatrist would take responsibility, and would be held responsible if there were any sort of legal proceedings, despite the fact that she used the qualifier “it seems”. So I don’t think that she was consciously trying to establish a motte that she could retreat to if challenged. Rather, I think it is more subconscious and habitual.
This seems like a bad epistemic habit though. Or, perhaps I should say, I’m pretty confident that it is a bad epistemic habit. I guess I have some work to do in countering it as well.
Here’s another example. I listen to the Thinking Basketball podcast. I notice that the cohost frequently uses the qualifier “necessarily”. As in, “Myles Turner can’t necessarily create his own shot”. What he means by that is “Myles Turner isn’t very good at creating his own shot”. This too I think is mostly habitual and subconscious, as opposed to being a conscious attempt to establish a motte that he can retreat to.
My best guess (~70%?) is that it’s actually just an urge. Like, “Ooo, ooo, that reminds me of X!”. Which leads to the person proceeding to talk about X. Which leads to someone else being reminded of Y, and proceeding to talk about Y.
I don’t think there’s much dominance signaling or emotional “sending”. At least not intentionally and consciously.
Some reasons why I frequently prefer communicating via text
Ah yes! That’s really helpful. I just realized that that’s a big part of what’s happening with me as well. Thanks!
where the poster does not seem to fully know what they are talking about and should have done more homework first
To me it seems appropriate to post about things that you don’t know too much about, as long as you make a reasonable effort to communicate where your confidence is. Ie. if you say “here are some thoughts about X; I am just spitballing and don’t know too much about X”, that seems fine.
What do you think? I suspect you also think it’s fine and are instead trying to point at confident assertions about things the author doesn’t actually know much about.
I’ve never done so explicitly. Well, one time I did for a few weeks. I was doing great, with most of my predictions being pretty accurate, and then a handful of them turned out to wildly undershoot (i.e. the tasks took way longer than I expected), I lost motivation, and stopped.
Overall though, if I think back to my assessments over the years and how accurate they turned out to be, I think I lean slightly towards being too overconfident.
When I think about problems like these, I use what feels to me like a natural generalization of the economic idea of efficient markets. The goal is to predict what kinds of efficiency we should expect to exist in realms beyond the marketplace, and what we can deduce from simple observations. For lack of a better term, I will call this kind of thinking inadequacy analysis.
I think this is pretty applicable to highly visible blog posts, such as ones that make the home page in popular communities such as LessWrong and Hacker News.
Like, if something makes the front page as one of the top posts, it attracts lots of eyeballs. With lots of eyeballs, you get more prestige and social status for saying something smart. So if a post has lots of attention, I’d expect lots of the smart-things-to-be-said to have been said in the comments.
On being downvoted
On mobile: double-tap the vote button (ignore a tool-tip telling you to hold).
I strongly suspect that many people don’t realize this and that it’d be better to have a button that says “strong upvote” instead.
Asking for help as an O(1) lookup
I agree that people should probably be able to reply to comments on their own posts. However, if enabling this is a non-trivial amount of work, I suspect the LW team’s time would be better spent elsewhere.
I base this on the presumptions that 1) there aren’t too many people this policy would help (dozens? single-digits?), 2) these people wouldn’t bring much value to the community, and 3) such a policy is unlikely to be deterring people we’d otherwise want from joining and contributing to the community.
To the extent you believe that Nonlinear has been a dysfunctional environment, in significant part due to domineering characteristics of senior staff, I think that you should also believe that a junior family member beginning to work in this environment is going to have a hard time reasoning through and pushing back against it.
Successfully pushing back is certainly difficult. Instead, I would expect, in general, that Good Person wouldn’t have a very strong relationship with their brother, Bad Person, in the first place, and would either not end up working with them or would quit once they started working with them and observed various bad things.
I wonder: is it appropriate to approach this situation from the perspective of gossip? As opposed to a perspective closer to formal legal systems?
I’m not sure. I suspect moderately strongly that a good amount of gossip is appropriate here, but that, at the same time, other parts of this should be approached from a more conservative and formal perspective. I worry that sticking one’s chin up in the air at the thought of gossiping is a Valley of Bad Rationality and something a midwit would do.
Robin Hanson has written a lot about gossip. It seems that social scientists see it as something that certainly has its place. From Scientific American’s The Science of Gossip: Why We Can’t Stop Ourselves:
Is Gossip Always Bad?
The aspect of gossip that is most troubling is that in its rawest form it is a strategy used by individuals to further their own reputations and selfish interests at the expense of others. This nasty side of gossip usually overshadows the more benign ways in which it functions in society. After all, sharing gossip with another person is a sign of deep trust because you are clearly signaling that you believe that this person will not use this sensitive information in a way that will have negative consequences for you; shared secrets also have a way of bonding people together. An individual who is not included in the office gossip network is obviously an outsider who is not trusted or accepted by the group.

There is ample evidence that when it is controlled, gossip can indeed be a positive force in the life of a group. In a review of the literature published in 2004, Roy F. Baumeister of Florida State University and his colleagues concluded that gossip can be a way of learning the unwritten rules of social groups and cultures by resolving ambiguity about group norms. Gossip is also an efficient way of reminding group members about the importance of the group’s norms and values; it can be a deterrent to deviance and a tool for punishing those who transgress. Rutgers University evolutionary biologist Robert Trivers has discussed the evolutionary importance of detecting “gross cheaters” (those who fail to reciprocate altruistic acts) and “subtle cheaters” (those who reciprocate but give much less than they get). [For more on altruism and related behavior, see “The Samaritan Paradox,” by Ernst Fehr and Suzann-Viola Renninger; Scientific American Mind, Premier Issue 2004.]
Gossip can be an effective means of uncovering such information about others and an especially useful way of controlling these “free riders” who may be tempted to violate group norms of reciprocity by taking more from the group than they give in return. Studies in real-life groups such as California cattle ranchers, Maine lobster fishers and college rowing teams confirm that gossip is used in these quite different settings to enforce group norms when an individual fails to live up to the group’s expectations. In all these groups, individuals who violated expectations about sharing resources and meeting responsibilities became frequent targets of gossip and ostracism, which applied pressure on them to become better citizens. Anthropological studies of hunter-gatherer groups have typically revealed a similar social control function for gossip in these societies.
Anthropologist Christopher Boehm of the University of Southern California has proposed in his book Hierarchy in the Forest: The Evolution of Egalitarian Behavior (Harvard University Press, 1999) that gossip evolved as a “leveling mechanism” for neutralizing the dominance tendencies of others. Boehm believes that small-scale foraging societies such as those typical during human prehistory emphasized an egalitarianism that suppressed internal competition and promoted consensus seeking in a way that made the success of one’s group extremely important to one’s own fitness. These social pressures discouraged free riders and cheaters and encouraged altruists. In such societies, the manipulation of public opinion through gossip, ridicule and ostracism became a key way of keeping potentially dominant group members in check.
There were various suspicious/bad things Drew did.
Viewed in isolation, that could have a wide spectrum of explanations. Maybe we could call it weak-to-moderate evidence in favor of him being “bad”.
But then we have to factor in the choice he’s made to kinda hang around Emerson and Kat for this long. If we suppose[1] that we are very confident that Emerson and Kat are very bad people who’ve done very bad things, then, well, that doesn’t reflect very favorably on Drew. I think it is moderate-to-strong evidence that Drew is “bad”.
[1] If you don’t believe this, then of course it wouldn’t make sense to view his hanging around Emerson and Kat as evidence of him being “bad”.
My guess about why this was downvoted would be that the downvoters, somewhat presumptuously, are assuming from the “If we’re alive in 5 years” part that you have very short timelines, they think this is foolish, and for that reason dislike the post.
FWIW, if that is in fact what happened, I very much disapprove of it. It isn’t crazy to have such short timelines. Plenty of reputable people take seriously the idea that timelines can be this short. I think it is reasonable to downvote people for posting about ideas that are incredibly implausible, but I think it is extremely hard to argue that such short timelines are that implausible.