Associate yourself with people whom you can confidently and cheerfully outperform the Nash Equilibrium with.
lionhearted (Sebastian Marshall)
I agree that drive-by unpleasant criticisms without substance (“Obtuse”) don’t seem productive, but I actually think some of the mild “tonally unpleasant” ones could be very valuable. It’s a way for an author to inexpensively let a commenter know that they didn’t appreciate the comment.
“Not what I meant” seems particularly valuable for when someone mis-summarizes or wrongly infers what was written, and “Not worth getting into” seems useful when someone goes unproductively deep on a fine-grained detail of something more macro-oriented.
One challenge, though, is when you have mixed agreement with someone. I disagree on tonal unpleasantness and the grouping style—“Taboo your words” might be friendly, for instance, to keep sharpening discussion, and isn’t necessarily critical. But I agree on the meta/bikeshed grouping and on clearing up some of the ambiguous ones.
I clicked both “Disagree” and “Agree” on yours for partial agreement / mixed agreement, but that seems kind of unintuitive.
Not sure how many posts you’ve made here or elsewhere, but as someone who has done a lot of public writing, I see this as a godsend. It will reflect poorly on someone who deploys those a lot in a passive-aggressive way, but we’ve all seen threads that are exhausting to the original poster.
This seems particularly useful for when someone makes a thoughtful but controversial point that spurs a lot of discussion. The ability to acknowledge you read someone’s comment without deeply engaging with it is particularly useful in those cases.
I turned this on for a recent post and I’m incredibly impressed.
This is the coolest feature I’ve seen for discussion software in many years.
Highly recommended to try it out if you make a post.
Devil’s Advocate: Adverse Selection Against Conscientiousness
I’m a Westerner, but did business in China, have quite a few Chinese friends and acquaintances, and have studied a fair amount of classical and modern Chinese culture, governance, law, etc.
Most of what you’re saying matches my experience. A lot of Western ideas were generally regarded as “sounds nice but is hypocritical and not what Westerners actually do” (a common viewpoint until ~10 years ago), with a somewhat newer idea emerging later: “actually no, many young Westerners are sincere about their ideas—they’re just crazy in an ideological way about things that can’t and won’t work.” (白左, etc)
The one place I might disagree with you is that I think mainland Chinese leadership tends to have two qualities that might be favorable towards understanding and mitigating AI risk:
(1) The majority of senior Chinese political leadership are engineers and seem intrinsically more open to having conversations along science and engineering lines than the majority of Western leadership. Pathos-based arguments, especially emerging from Western intellectuals, do not get much uptake in China and aren’t persuasive. But concerns around safety, second-order effects, third-order effects, complex system dynamics, causality, etc, grounded in scientific, mathematical, and engineering principles seem to be engaged with easily at face value in private conversations, and with a level of technical sophistication such that there doesn’t need to be as much direct reliance on asking industry leaders and specialists to explain and contextualize diagrams, concepts, technologies, etc. Senior Chinese leadership also seem to be better—this is just my opinion—at identifying credible and non-credible sources of technical information and identifying experts who make sound arguments grounded in causality. This is a very large advantage.
(2) In recent decades, it seems like mainland Chinese leadership are able to both operate on longer timescales—credibly making and implementing multi-decade plans and running them—as well as making rapid changes in technology adoption, regulation, and economic markets once a decision has been made in an area. The most common examples we see in the West are videos of skyscrapers being constructed very rapidly, but my personal example is I remember needing to go pay my rent with shoeboxes full of 100 renminbi notes during the era of Hu Jintao’s chairmanship and being quite shocked when China went to near cashless almost overnight.
I think those two factors—genuine understanding of engineering and technical causality, combined with greater viability for engaging in both longer-timescale and short-timescale action—seem like important points worth mentioning.
Hmm. Looks like I was (inadvertently) one of the actors in this whole thing. Not intended and unforeseen. Three thoughts.
(1) At the risk of sounding like a broken record, I just wanna say thanks again to the moderation team and everyone who participates here. I think oftentimes the “behind the scenes coordination work” doesn’t get noticed during all the good times, and not enough credit gets given. I just like to notice it and say it outright. For instance, I went to the Seattle ACX meetup yesterday, which I saw on here (LW), since I check ACX less frequently than LW. I had a great time and had some really wonderful conversations. I’m appreciative of all the people facilitating that, including Spencer (Seattle meetup host) and the whole team that built the infrastructure here to facilitate sharing information, getting to know each other, etc.
(2) Just to clarify—not that it matters—my endorsement of Duncan’s post was about the specific content in it, not about the author of the post himself. I do think Duncan did a really nice job taking very complex concepts and boiling them down to guidelines like “Track (for yourself) and distinguish (for others) your inferences from your observations” and “Estimate (for yourself) and make clear (for others) your rough level of confidence in your assertions” — he really summed up some complex points very straightforwardly and in a way that makes the principles much easier to implement / operationalize in one’s writing style. That said, I didn’t realize when I endorsed the Rationalist Discourse post that there were some interpersonal tensions independent from the content itself. Both of those posters seem like decent people to me, but I haven’t dug deep on it and am not particularly informed on the details.
(3) I won’t make a top-level post about this, because second-degree meta-engagement with community mechanics risks setting off more second-degree and third-degree meta-engagement, and things spiral. But as a quick recommendation to people interested in how people relate with each other, my favorite movie is Unforgiven, a very non-traditional Clint Eastwood movie. It’s like a traditional Western (cowboys, horses, etc) but really very different from the normal genre. Basically, there’s only one genuinely unprovoked “bad guy” in the movie, who has causal agency for only about 30-60 seconds of doing something bad. After that, it’s all just a chain reaction of people doing as best as they can by their values and friends, and yet the results are very bad for everyone. Incidentally, it’s also a really cinematically beautiful movie, which contrasts with the unfolding tragedy. It’s a great movie. Highly recommended.
A bit late in commenting and I understand the “mongo” example was pointing at a more general concept, but I decided to check in on the current state of prediction. Not perfect, n=1, could certainly be set out better, but thought I’d give this a whirl:
Hello, I’d like to test your predictive ability on something interesting and novel. May we?
Hello! Of course, I’d be happy to help you with your prediction. What would you like me to predict?
First, some context — I’m an American tech CEO. I like and have read a lot of classical philosophy, ranging from Aristotle to Wittgenstein and much in between. I’m an amateur historian who has written a few million words of nonfiction with some historical fiction mixed in. I’m logical and technical. I’m in my late 30s. I’m from the East Coast but in Seattle. That’s the context.
Thank you for the context. Is there a specific question or topic you would like me to predict about based on your background and interests?
Yes. I’m going to think of a fictional book. It’s a book I like—one of my favorites. I’d like you to make a list of 5 guesses of what book I might be thinking of, along with what the most likely of the 5 is.
Certainly, I’ll do my best to generate a list of 5 guesses for your favorite book based on your background and interests. Here are my predictions:
“The Brothers Karamazov” by Fyodor Dostoevsky—This novel explores complex philosophical and religious themes, which aligns with your interest in classical philosophy. Its exploration of morality and ethics would also appeal to your logical and technical mindset.
“The Name of the Rose” by Umberto Eco—This historical mystery novel is set in a medieval monastery and draws upon elements of philosophy and theology, which would align with your interests in both history and philosophy.
“Infinite Jest” by David Foster Wallace—This novel combines humor, philosophy, and commentary on contemporary American culture, which would align with your interests in both philosophy and fiction. Its complexity would also appeal to your logical and technical mindset.
“The Hitchhiker’s Guide to the Galaxy” by Douglas Adams—This satirical science fiction novel uses humor to explore philosophical and existential themes, which would appeal to your interest in philosophy. Its wit and creativity would also appeal to your literary and imaginative side.
“Slaughterhouse-Five” by Kurt Vonnegut—This anti-war novel uses elements of science fiction to explore the themes of free will and the nature of time, which would align with your interest in both history and science fiction. Its blend of humor and tragedy would also appeal to your literary sensibilities.
Based on your interests and background, my most likely guess for your favorite book would be “The Brothers Karamazov” by Fyodor Dostoevsky.
Ah, not so. Let’s narrow down a bit. It’s science fiction. Would you try again?
Thank you for the clarification. Here are 5 new predictions based on your background and interest in science fiction:
“Dune” by Frank Herbert—This classic science fiction novel is known for its complex world-building and exploration of political and philosophical themes, which align with your interests in history, philosophy, and science fiction.
“Snow Crash” by Neal Stephenson—This cyberpunk novel explores themes of language, culture, and technology, which align with your interest in technical and technological topics. Its blend of humor and action would also appeal to your literary and imaginative side.
“Ender’s Game” by Orson Scott Card—This science fiction novel explores themes of morality, leadership, and the consequences of war, which align with your interests in history, philosophy, and science fiction. Its blend of action and psychological complexity would also appeal to your literary and imaginative side.
“The Diamond Age” by Neal Stephenson—This novel explores themes of technology, culture, and education, which align with your interest in technical and technological topics. Its exploration of political and social issues would also appeal to your interest in history.
“Hyperion” by Dan Simmons—This science fiction novel is known for its complex world-building and exploration of philosophical and theological themes, which align with your interest in classical philosophy. Its structure, which is composed of interconnected stories, would also appeal to your literary sensibilities.
Based on your interest in science fiction, my most likely guess for your favorite book would be “Dune” by Frank Herbert.
Not a bad guess—I like Dune a lot. I was thinking of Asimov’s “Foundation” though.
Ah, I see! “Foundation” is a classic work of science fiction and definitely a great choice. I apologize for not guessing it correctly on the first try, but I hope my predictions were still interesting and useful. Thank you for the opportunity to test my predictive abilities!
Can we hypothesize a little on why you didn’t get Foundation? It combines technology, history, and philosophy...
Certainly! While I cannot say for sure why I did not predict “Foundation” as your favorite book, one possible reason could be that your background and interests were broad enough to support several different science fiction titles. Additionally, while “Foundation” does incorporate elements of technology, history, and philosophy, these themes are also present in many other science fiction works, making it challenging to narrow down to one specific book.
Another possibility is that the themes and style of “Foundation” may not have stood out as strongly to me based on the limited information I had about your interests and background. While I did take into account your interest in history and philosophy, other factors such as your preference for a specific writing style or subgenre of science fiction may have been important in predicting your favorite book.
I’ve read it. There was some back-and-forth between him and Tegmark on the topic.
n=3 AI Risk Quick Math and Reasoning
Hey, first just wanted to say thanks and love and respect. The moderation team did such an amazing job bringing LW back from nearly defunct into the thriving place it is now. I’m not so active in posting now, but check the site logged out probably 3-5 times a week and my life is much better for it.
After that, a few ideas:
(1) While I don’t 100% agree with every point he made, I think Duncan Sabien did an incredible job with “Basics of Rationalist Discourse”—https://www.lesswrong.com/posts/XPv4sYrKnPzeJASuk/basics-of-rationalist-discourse-1 - perhaps a boiled-down canonical version of that could be created. Obviously the pressure to get something like that perfect would be high, so maybe something like “Our rough thoughts on how to be a good contributor here, which might get updated from time to time”. Or just link Duncan’s piece as “non-canonical for rules but a great starting place.” I’d hazard a guess that 90% of regular users here agree with at least 70% of it? If everyone followed all of Sabien’s guidelines, there’d be a rather high quality standard.
(2) I wonder if there are some reasonably precise questions you could ask new users to check for understanding, which could be there as a friendly-ish guidepost if a new user is going wayward. Your example—“(for example: “beliefs are probabilistic, not binary, and you should update them incrementally”)”—seems like a really good one. Obviously those should be incredibly non-contentious, but something that would demonstrate a core understanding. Perhaps 3-5 of those, maybe something a person writes up formal commentary on via their personal blog before posting?
(3) It’s fallen from its peak glory years, but sonsofsamhorn.net might be an interesting reference case to look at — it was one of the top analytical sports discussion forums for quite a while. At the height of its popularity, many users wanted to join but wouldn’t understand the basics—for instance, that a poorly-positioned player on defense making a flashy “diving play” to get the baseball wasn’t a sign of good defense, but rather a sign that that player has a fundamental weakness in their game, which could be investigated more deeply with statistics—and we can’t just trust flashy replay videos to be accurate indicators of defensive skill. (Defense in American baseball is particularly hard to measure and sometimes contentious.) What SOSH did was create an area called “The Sandbox” which was relatively unrestricted — spam and abuse still weren’t permitted of course, but the standard of rigor was a lot lower. Regular members would engage in Sandbox threads from time to time, and users who made excellent posts and comments in The Sandbox would get invited to full membership. Probably not needed at the current scale level, but might be worth starting to think about for a long-term solution if LW keeps growing.
Thanks so much for everything you and the team do.
I had a personal experience that strongly suggests that this is at least partially true.
I had a mountaineering trip in a remote location that went off the rails pretty badly — it was turning into a classic “how someone dies in the woods” story. There was a road closure some miles ahead of where I was supposed to drive, I hiked an extra 8 miles in, missed the correct trail, tried to take a shortcut, etc etc—it got ugly.
I felt an almost complete lack of distress or self-pity the entire time. I was just very focused methodically on orienting around my maps and GPS and getting through the next point.
I was surprised at how little negative internal discourse or negative emotions I felt. So, n=1 here, but it was very informative for me.
This isn’t necessarily “Come for the instrumentality, stay for the epistemology” — but, maybe.
Come for the productivity, stay for the philosophy
broke peace first.
Have you read “Metaphors We Live By” by Lakoff?
The first 20 pages or so are almost a must-read in my opinion.
Highly recommended, for you in particular.
A Google search with filetype:pdf will find you a copy. You can skim it fast — no need to close-read it — and you’ll get the gems.
Edit for exhortation: I think you’ll get a whole lot out of it such that I’d stake some “Sebastian has good judgment” points on it that you can subtract from my good judgment rep if I’m wrong. Seriously please check it out. It’s fast and worth it.
I had literally the exact same experience before I read your comment dxu.
I imagine it’s likely that Duncan could sort of burn out on being able to do this, since it’s pretty thankless, difficult cognitive work.
But it’s really insightful to watch. I do think he could potentially tune up the diplomatic savvy a bit — while his arguments are quite sound, I think he probably is sometimes making people feel a little bit stupid via his tone.
Nevertheless, it’s really fascinating to read and observe. I feel vaguely like I’m getting smarter.
Rigor for the hell of it:
1. Hedged hypothesis.
2. Two-premise assertion with a slightly subjective basis, but I think a true one.
3. Elaborated on a slightly different but related point further in my comment below to him, with an example.
4. Vague, but I think acceptably so. To elaborate, I mean making one’s ideas palatable to the person one is disagreeing with, even while disagreeing. Note: I’m aware it doesn’t acknowledge the cost of doing so and running that filter. Note also: I think, with skill and practice, this can be done without sacrificing the content of the message. It is almost always more time-consuming, though, in my experience.
5. There are some subjective judgments and utility-function considerations, which are subjective naturally, but his core factual arguments, premises, and analyses basically all look correct to me.
6. Hedged hypothesis. Note: doesn’t make a judgment either way as to whether it’s worth it or not.
7. Added after writing to double-check I’m playing by the rules and clear up ambiguity. “For the hell of it” is just random stylishness and can be safely mentally deleted.
(Or perhaps, if I introspect closely, a way to not be committed to this level of rigor all the time. As stated below though, minor stylistic details aside, I’m always grateful whenever a member of a community attempts to encourage raising and preserving high standards.)
First, I think promoting and encouraging higher standards is, if you’ll pardon the idiom, doing God’s work.
I’m so appreciative any time any member of a community looks to promote and encourage higher standards. It takes a lot of work and gets a lot of pushback and I’m always super appreciative when I see someone work at it.
Second, and on a much smaller note, if I might offer some… stylistic feedback?
I’m only speaking here about my personal experience and heuristics. I’m not speaking for anyone else. One of my heuristics — which I darn well know isn’t perfectly accurate, but it’s nevertheless a heuristic I implicitly use all the time and which I know others use — is looking at language choices made when doing a quick skim of a piece as a first-pass filter of the writer’s credibility.
It’s often inaccurate. I know it. Still, I do it.
Your writing sometimes, when you care about an issue, seems to veer very slightly into resembling the writing of someone who is heated up about a topic in a way that leads to less productive and coherent thought.
This leads my default reaction to discounting the credibility of the message slightly.
I have to forcibly remind myself not to do that in your case, since you’re actually taking pretty cohesive and intelligent positions.
As a small example:
These are all terrible ideas.
These are all terrible ideas.
I’m going to say it a third time, because LessWrong is not yet a place where I can rely on my reputation for saying what I actually mean and then expect to be treated as if I meant the thing that I actually said: I recognize that these are terrible ideas.
I just — umm, in my personal… umm… filters… it doesn’t look good on a skim pass. I’m not saying emulate soulless garbage at the expense of clarity. Certainly not. I like your ideas a lot. I loved Concentration of Force.
I’m just saying that, on the margin, if you edited down some of the first-person language and strong expressions of affect a little bit in areas where you might be concerned about it being “not yet a place where I can rely on my reputation for saying what I actually mean”… it might help credibility.
I’ve written quite literally millions of words in my life, so I can say from firsthand experience that lines like that do successfully pre-empt stupid responses, so you get fewer dumb comments.
But I think it’s likely you take anywhere from a 10% to 50% penalty to credibility to many casual skimmers of threads who do not ever bother to comment (which, incidentally, is both the majority of readers and me personally in 2021).
I see things like the excerpted part, and I have to consciously remind myself not to apply a credibility discount to what you’re saying, because (in my experience and perhaps unfairly) I pattern match that style to less credible people and less credible writing.
Again, this is just a friendly stylistic note. I consider myself a fan. If I’m mistaken or it’d be expensive to implement an editing filter for toning that down, don’t bother — it’s not a huge deal in the grand scheme of things, and I’m really happy someone is working on this.
I suppose I’m just trying to improve the good guys’ effectiveness for concentration of force reasons, you could say.
Salut and thanks again.
There’s a very thorough paper published in the American Journal of Epidemiology, “Use of a prescribed ephedrine/caffeine combination and the risk of serious cardiovascular events: a registry-based case-crossover study”, DOI: 10.1093/aje/kwn191
Apparently, and this really surprised me,
“Use of prescribed ephedrine in Denmark — Letigen was a pharmaceutical product containing 20 mg of synthetic ephedrine and 200 mg of caffeine, available only by prescription. Its recommended dose was 1–3 tablets per day, depending on the user’s tolerance. It was approved for sale in Denmark in 1990. During the peak of its use in 1999, some 110,000 persons, corresponding to 2% of the Danish population, were treated. In 2002, the marketing license was suspended, after a number of reports had suggested a safety problem.”
So there’s a pretty big sample there.
Now note, I’m not a doctor and this is just my opinion — it seems that some people should never take ephedrine under any circumstances (certain heart problems or family history of certain heart problems, etc) and anyone else ought to be really quite careful taking it, even if it’s legal and approved in one’s jurisdiction.
Ephedrine increases metabolic activity and thermogenesis — heat production — and it’s more dangerous when it’s hot outside, when you’re doing any aerobic activity, or when you’ve had other stressors on your heart or run into other contraindications.
Speculatively, it seems possible that safety rates in Denmark might be higher than elsewhere since it doesn’t get very hot there. If you compared someone using ephedrine/caffeine in Siberia in the winter to Dubai in the summer, the increased thermogenesis and physically radiating more heat might seem like a beneficial side effect in an arctic blizzard whereas both uncomfortable and dangerous under a desert sun.
I’m going off the top of my head here since I don’t have a copy in front of me, but I remember some very persuasive arguments and citations in the (terribly titled but otherwise quite good) book Extreme Productivity by Bob Pozen.
Basically, Pozen’s cited studies found the steady approach pays off on basically every dimension you’d care about (including quality and quantity of the work, efficiency, and decreased various badness). I found it pretty persuasive and switched from working in intense bursts to a more methodical way when writing, for the next few years, and it worked well for me. I got the time it took me to write a 6000 word essay down from ~40 hours to the 12-18 hour range, quality was better, and it was less stressful.
Doesn’t necessarily generalize, and I’d speculate it maybe generalizes least for things that benefit from being at some critical-mass threshold for a short period of time (say, like, an auction). That part is just speculation, though.
Re: the Repugnant Conclusion, it’s not necessarily my opinion, but there’s a coherent set of moral principles that values A+ over A but also A+ over B-.
It might come from something like rejecting diminishing marginal utility as relates to certain very big questions — thinking that yes, Mozart + five otherwise uncreated good lives of new musicians is better than Mozart alone, but a world of six musicians substantially worse than Mozart is worse than either just Mozart+0 or Mozart+5.
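To make that ordering concrete, here’s a tiny sketch with toy numbers of my own (the peak weighting and welfare levels are illustrative assumptions, not anything from the population-ethics literature): a valuation that weights the single best life heavily alongside total welfare ranks A+ above A, and both above B-.

```python
def value(welfares, peak_weight=100):
    """Value a population by its single best life (weighted heavily)
    plus the sum of all welfare levels."""
    return peak_weight * max(welfares) + sum(welfares)

A       = [100]                      # Mozart alone
A_plus  = [100, 50, 50, 50, 50, 50]  # Mozart plus five good-but-lesser musicians
B_minus = [55] * 6                   # six musicians, all substantially worse than Mozart

print(value(A))        # 10100
print(value(A_plus))   # 10350
print(value(B_minus))  # 5830
```

Under this function, adding good lives improves the world (A+ > A), but leveling everyone to a bit above the old average while losing the peak makes it worse (A+ > B-, and even A > B-) — which is the ordering the comment describes.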
Partially agreed again.
I’d be hesitant to label as “Critical” pointing out that someone has an invalid argument, and having it implicitly contrasted against “Positive” — it implies they’re opposites or antithetical in some way, y’know?
Also, respectfully disagree with this -
Sometimes a commenter, especially someone new, is just badly off the mark. That’s not a two-way street problem, it’s a Well-Kept Garden problem...