Associate yourself with people with whom you can confidently and cheerfully outperform the Nash equilibrium.
lionhearted (Sebastian Marshall)
Oh this is wild. This generated a strange emotion.
Anyone here know the word “Angespannt”? One of my team members taught it to me — a German word with no exact English equivalent. We talked about it —
https://www.ultraworking.com/podcast/big-project-angespannt
“It’s a mix of tense and alert in a way. It’s like the feeling you get before you go on stage.”
Like, why should I care? I’m obviously not going to press the damn thing. And yet, simply knowing the button is there generates some tension and alertness.
Fascinating. Thank you for doing this.
(Well, sort of thank you, to be more precise...)
Hi Agnes, I just wanted to say — much respect and regards for logging on to discuss and debate your views.
Whether we agree or not (personally, I’m in partial agreement with you) — if more people would create accounts and engage thoughtfully in different spaces after sharing a viewpoint, the world would be a much better place.
Salutations and welcome.
This has been some heroic work. This place is back to one of my favorite places to read for inspiration and learning. Huge congrats and thanks to the whole team.
Hey, first just wanted to say thanks and love and respect. The moderation team did such an amazing job bringing LW back from nearly defunct into the thriving place it is now. I’m not so active in posting now, but check the site logged out probably 3-5 times a week and my life is much better for it.
After that, a few ideas:
(1) While I don’t 100% agree with every point he made, I think Duncan Sabien did an incredible job with “Basics of Rationalist Discourse” (https://www.lesswrong.com/posts/XPv4sYrKnPzeJASuk/basics-of-rationalist-discourse-1) — perhaps a boiled-down canonical version of that could be created. Obviously the pressure to get something like that perfect would be high, so maybe something like “Our rough thoughts on how to be a good contributor here, which might get updated from time to time”. Or just link Duncan’s piece as “non-canonical for rules but a great starting place.” I’d hazard a guess that 90% of regular users here agree with at least 70% of it? If everyone followed all of Sabien’s guidelines, there’d be a rather high quality standard.
(2) I wonder if there are some reasonably precise questions you could ask new users to check for understanding, which could also serve as a friendly-ish guidepost if a new user is going wayward. Your example — “(for example: “beliefs are probabilistic, not binary, and you should update them incrementally”)” — seems like a really good one. Obviously those should be incredibly non-contentious, but something that would demonstrate a core understanding. Perhaps 3-5 of those, maybe something a person formally writes up some commentary on, on their personal blog, before posting?
(3) It’s fallen from its peak glory years, but sonsofsamhorn.net might be an interesting reference case to look at — it was one of the top analytical sports discussion forums for quite a while. At the height of its popularity, many users wanted to join but wouldn’t understand the basics—for instance, that a poorly-positioned player on defense making a flashy “diving play” to get the baseball wasn’t a sign of good defense, but rather a sign that that player has a fundamental weakness in their game, which could be investigated more deeply with statistics—and we can’t just trust flashy replay videos to be accurate indicators of defensive skill. (Defense in American baseball is particularly hard to measure and sometimes contentious.) What SOSH did was create an area called “The Sandbox” which was relatively unrestricted — spam and abuse still weren’t permitted of course, but the standard of rigor was a lot lower. Regular members would engage in Sandbox threads from time to time, and users who made excellent posts and comments in The Sandbox would get invited to full membership. Probably not needed at the current scale level, but might be worth starting to think about for a long-term solution if LW keeps growing.
Thanks so much for everything you and the team do.
I’m a Westerner, but did business in China, have quite a few Chinese friends and acquaintances, and have studied a fair amount of classical and modern Chinese culture, governance, law, etc.
Most of what you’re saying matches my experience. A lot of Western ideas are generally regarded as either “sounds nice but is hypocritical and not what Westerners actually do” (a common viewpoint until ~10 years ago), with a somewhat newer idea being “actually no, many young Westerners are sincere about their ideas — they’re just crazy in an ideological way about things that can’t and won’t work” (白左, “baizuo”, etc).
The one place I might disagree with you is that I think mainland Chinese leadership tends to have two qualities that might be favorable towards understanding and mitigating AI risk:
(1) The majority of senior Chinese political leadership are engineers and seem intrinsically more open to having conversations along science and engineering lines than the majority of Western leadership. Pathos-based arguments, especially those emerging from Western intellectuals, do not get much uptake in China and aren’t persuasive. But concerns around safety, second-order effects, third-order effects, complex system dynamics, causality, etc., grounded in scientific, mathematical, and engineering principles, seem to be engaged with easily at face value in private conversations, and with enough technical sophistication that there’s less need to rely on industry leaders and specialists to explain and contextualize diagrams, concepts, technologies, etc. Senior Chinese leadership also seem to be better — this is just my opinion — at identifying credible and non-credible sources of technical information, and at identifying experts who make sound arguments grounded in causality. This is a very large advantage.
(2) In recent decades, it seems like mainland Chinese leadership are able both to operate on longer timescales — credibly making and implementing multi-decade plans and running them — and to make rapid changes in technology adoption, regulation, and economic markets once a decision has been made in an area. The most common examples we see in the West are videos of skyscrapers being constructed very rapidly, but my personal example is that I remember needing to pay my rent with shoeboxes full of 100-renminbi notes during the era of Hu Jintao’s chairmanship, and being quite shocked when China went nearly cashless almost overnight.
I think those two factors — genuine understanding of engineering and technical causality, combined with greater viability for engaging in both longer-timescale and short-timescale action — seem like important points worth mentioning.
So, I think it’s important that LessWrong admins do not get to unilaterally decide that You Are Now Playing a Game With Your Reputation.
Dude, we’re all always playing games with our reputations. That’s, like, what reputation is.
And good for Habryka for saying he feels disappointment at the lack of thoughtfulness and reflection; that’s very much not just permitted but almost mandated by the founder of this place —
https://www.lesswrong.com/posts/tscc3e5eujrsEeFN4/well-kept-gardens-die-by-pacifism
https://www.lesswrong.com/posts/RcZCwxFiZzE6X7nsv/what-do-we-mean-by-rationality-1
Here’s the relevant citation from Well-Kept Gardens:
I confess, for a while I didn’t even understand why communities had such trouble defending themselves—I thought it was pure naivete. It didn’t occur to me that it was an egalitarian instinct to prevent chieftains from getting too much power.
This too:
I have seen rationalist communities die because they trusted their moderators too little.
Let’s give Habryka a little more respect, eh? Disappointment is a perfectly valid thing to be experiencing, and he’s certainly conveying it quite mildly and graciously. The admins here did a hell of a job resurrecting this place from the dead; to express very mild disapproval at a lack of thoughtfulness during a community event is... well, that seems very much on-mission, at least according to Yudkowsky.
This is an interesting post — you’re covering a lot of ground in a wide-ranging fashion. I think it’s a virtual certainty that you’ll come up with some interesting and very useful points, but a quick word of caution — I think this is an area where “mostly correct” theory can be a little dangerous.
Specifically:
>If you earn 4% per year, then you need the aforementioned $2.25 million for the $90,000 half-happiness income. If you earn 10% per year, you only need $900,000. If you earn 15% per year, you only need $600,000. At 18% you need $500,000; at 24% you need $375,000. And of course, you can acquire that nest egg a lot faster if you’re earning a good return on your smaller investments. [...] I’m oversimplifying a bit here. While I do think 24% returns (or more!) are achievable, they would be volatile.
You’re half correct here, but you might be making a subtle mistake — specifically, you might be using ensemble probability in a non-ergodic space.
Recommended reading (all of these can be Googled): safe withdrawal rate, expected value, variance, ergodicity, ensemble probability, Kelly criterion.
Specifically, naive expected value (EV) in investing tends to implicitly assume ergodicity; financial returns are non-ergodic; it’s very possible to wind up broke with near certainty, even with high expected returns, if your amount of capital deployed is too low for the strategy you’re operating.
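To make the ergodicity point concrete, here’s a toy simulation (my own sketch, not from your post — the 50%-up / 40%-down bet is a standard illustrative example, not your numbers): a bet with a positive arithmetic EV of +5% per round, but a negative geometric growth rate, so the ensemble average grows while almost every individual path goes broke.

```python
import random

# Toy bet: each round, wealth goes up 50% (heads) or down 40% (tails).
# Arithmetic EV per round: 0.5*1.5 + 0.5*0.6 = 1.05  (+5% -- looks great)
# Geometric mean per round: sqrt(1.5 * 0.6) ~= 0.9487 (-5.1% -- slow ruin)

random.seed(0)

def simulate(rounds, trials):
    """Return final wealth (starting from 1.0) for each of `trials` paths."""
    finals = []
    for _ in range(trials):
        wealth = 1.0
        for _ in range(rounds):
            wealth *= 1.5 if random.random() < 0.5 else 0.6
        finals.append(wealth)
    return finals

finals = simulate(rounds=100, trials=10_000)
ensemble_avg = sum(finals) / len(finals)
broke = sum(1 for w in finals if w < 0.01) / len(finals)

# The ensemble average is propped up by a few astronomically lucky paths;
# the typical (median) path ends near zero.
print(f"ensemble average wealth: {ensemble_avg:.2f}")
print(f"fraction of paths below 1% of starting wealth: {broke:.0%}")
```

The median path ends around 0.5% of starting wealth even though the naive EV calculation says +5% per round. That’s the trap: averaging over the ensemble when you only get to live one path.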
Yes, there are valid counter-counterarguments here, but you didn’t make any of them! The words and phrases “safety,” “margin of safety,” “bankroll,” “ergodicity,” etc., didn’t show up.
The best counterargument is probably low-capital-required arbitrage such as what Zvi described here; indeed, I followed his line of thinking and recently got pure arbitrage on this question personally — just for the hell of it, on nominal money. It’s, like, a hobby thing. [Edit: btw, thanks Zvi.] This is more-or-less only possible because of some odd rules they’ve adopted for regulatory reasons and for UI/UX simplicity, which result in some odd behavior.
Anyway, I digress; I like the general area of exploration you’re embarking on a lot, but “almost correct” in finance is super dangerous, and I wanted to flag one instance of that. Consistent high returns on a small amount of capital do not seem like a good strategy to me; further, if you can get 24%+ a year on any substantial volume, you should probably just stack up some millions for a few years, after which you could rely on passive returns without the intense discipline needed to keep getting those returns (even setting aside ergodicity/bankroll issues).
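Quick sanity check on that “stack up for a few years, then coast” claim, using your own numbers (this is just my arithmetic, sketched out): at 24%/yr, the $375k “high-return” nest egg compounds past the $2.25M “4% passive” nest egg in about nine years.

```python
def years_to_target(capital, rate, target):
    """Whole years of compounding at `rate` until `capital` reaches `target`."""
    years = 0
    while capital < target:
        capital *= 1 + rate
        years += 1
    return years

# Numbers from the post: $375k compounding at 24%/yr,
# versus the $2.25M needed for $90k/yr at a 4% withdrawal rate.
print(years_to_target(375_000, 0.24, 2_250_000))  # → 9
```

So even granting the optimistic return assumption, the coherent plan is a fixed-length sprint to the passive threshold, not sustaining 24% indefinitely.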
Lynch’s One Up on Wall Street is an excellent take by someone who actually managed to make that type of return for multiple decades; it’s not exactly something you do casually...
(Disclaimer: certainly not an expert, potentially some mistakes here, not comprehensive, etc etc etc.)
Great post.
By the way, taken to its logical conclusion —
People don’t move to new apartments frequently enough.
If your neighbors suck badly and you can’t influence them, or if you live in a place that’s badly maintained and the building management won’t do anything about it, you really should strongly consider moving.
You tell that to somebody, you’re likely to get one of the following arguments —
(1) That’d be too expensive (time, money, etc)! Possibly. I didn’t say the person should move, or should move immediately. I just said “strongly consider” — i.e., run the math, search out options, see if you can be creative, and consider a temporary solution like crashing with a friend, staying with your parents, or finding some subsidized housing for a short period of time to bank cash and then get a better place. If your apartment is causing major lifestyle disruptions or headaches with any sort of frequency, I’m just saying you should strongly consider moving. I feel really strongly that people should do this, because there have been two or three times in my life when I moved too slowly, and I’d have been much better off taking on $1000-$3000 and dozens of hours of cost to move apartments — even if it was a huge hassle beyond those factors — because my life got hugely, obviously better after moving. Just, at least, run an analysis of all the costs, research options, and weigh it against expected value. I’m not saying you gotta do it, just that you really ought to think about it.
(2) You don’t know what it’s like to be broke! Ah, the moral argument in favor of not even considering changing a bad situation. This argument is basically, “Don’t make me feel bad and don’t assert that I can have agency here.” This argument is kinda unfortunate, because “hey, dude/dudette, you should really consider moving given how much your living setup sucks and is getting you down” seems pretty reasonable and is usually a pretty friendly argument.
For the record, by the way, that second argument is false in my case — the nickname of one of my first apartments was “The House of Horrors.” Windows were partially broken in a ghetto Boston suburb. My bedroom got freezing cold in the winter. Lots of crime in the neighborhood, and regular rowdy behavior from patrons of local boozeries made getting a decent sleep on Friday and Saturday evenings a dice roll most weekends. (A dice roll I usually lost.) Kitchen was full of broken stuff, mold in the refrigerator, ceiling at times leaked water through a lighting fixture which umm, seemed dangerous.
One day I was sleeping in around 10 AM and woke up to hear a chainsaw from inside my own apartment. Like a horror movie — this was when the apartment got its nickname. It turned out my landlord had decided to do something about the water-leaking-into-light-fixture problem and got a handyman to chainsaw my ceiling, but didn’t think to knock before letting himself in or check my bedroom, just assuming I was out. So I woke up to a man with a chainsaw in my apartment, chainsawing my kitchen ceiling. It wasn’t perhaps as dramatic as it sounds in text; nevertheless — somewhat unsettling.
So yeah, actually, I know what it’s like to be broke as fuck. Nevertheless — while amusing years later, I ought to have at least strongly considered moving sooner. It seems a bit irrational in retrospect to not strongly consider it sooner. Life got a lot better once I did.
First, I think promoting and encouraging higher standards is, if you’ll pardon the idiom, doing God’s work.
Thank you.
I’m so appreciative any time any member of a community looks to promote and encourage higher standards. It takes a lot of work and gets a lot of pushback and I’m always super appreciative when I see someone work at it.
Second, and on a much smaller note, if I might offer some... stylistic feedback?
I’m only speaking here about my personal experience and heuristics. I’m not speaking for anyone else. One of my heuristics — which I darn well know isn’t perfectly accurate, but it’s nevertheless a heuristic I implicitly use all the time and which I know others use — is looking at language choices made when doing a quick skim of a piece as a first-pass filter of the writer’s credibility.
It’s often inaccurate. I know it. Still, I do it.
Your writing sometimes, when you care about an issue, veers very slightly into resembling the writing of someone who is heated up about a topic in a way that leads to less productive and coherent thought.
My default reaction is then to discount the credibility of the message slightly.
I have to forcibly remind myself not to do that in your case, since you’re actually taking pretty cohesive and intelligent positions.
As a small example:
These are all terrible ideas.
These are all
terrible
ideas.
I’m going to say it a third time, because LessWrong is not yet a place where I can rely on my reputation for saying what I actually mean and then expect to be treated as if I meant the thing that I actually said: I recognize that these are terrible ideas.
I just — umm, in my personal... umm... filters... it doesn’t look good on a skim pass. I’m not saying emulate soulless garbage at the expense of clarity. Certainly not. I like your ideas a lot. I loved Concentration of Force.
I’m just saying that, on the margin, if you edited down some of the first-person language and strong expressions of affect a little bit in areas where you might be concerned about it being “not yet a place where I can rely on my reputation for saying what I actually mean”… it might help credibility.
I’ve written quite literally millions of words in my life, so I can say from firsthand experience that lines like that do successfully pre-empt stupid responses, so you get fewer dumb comments.
That’s true.
But I think it’s likely you take anywhere from a 10% to 50% credibility penalty with many casual skimmers of threads who never bother to comment (which, incidentally, describes both the majority of readers and me personally in 2021).
I see things like the excerpted part, and I have to consciously remind myself not to apply a credibility discount to what you’re saying, because (in my experience and perhaps unfairly) I pattern match that style to less credible people and less credible writing.
Again, this is just a friendly stylistic note. I consider myself a fan. If I’m mistaken or it’d be expensive to implement an editing filter for toning that down, don’t bother — it’s not a huge deal in the grand scheme of things, and I’m really happy someone is working on this.
I suppose I’m just trying to improve the good guys’ effectiveness for concentration of force reasons, you could say.
Salut and thanks again.
“Rae, this is a friendly reminder from the universe that you can only at best control the first-order effects of systems you create...”
In a way that’s mildly subtly intimidating, in order to bring out the Bruce in the other person. I seem to recall a study that showed that when randomly dividing sports players into wearing red jerseys and blue jerseys, the red team won a statistically significant larger percentage of the time—maybe a 1% edge or something from red?
So I’d go with clean, straight lines on strong red clothing, maybe with a little black mixed in, impeccable grooming, and otherwise just look like you’re going to win. If it makes someone say “fuck it” and not do the combat math in their head just one time because your opponent has mentally crumbled, then your odds are improved.
Huh. Interesting.
I had literally the exact same experience before I read your comment dxu.
I imagine it’s likely that Duncan could sort of burn out on being able to do this [1] since it’s pretty thankless difficult cognitive work. [2]
But it’s really insightful to watch. I do think he could potentially tune up [3] the diplomatic savvy a bit [4], since while his arguments are quite sound [5], I think he probably is sometimes making people feel a little bit stupid via his tone. [6]
Nevertheless, it’s really fascinating to read and observe. I feel vaguely like I’m getting smarter.
###
Rigor for the hell of it [7]:
[1] Hedged hypothesis.
[2] Two-premise assertion with a slightly subjective basis, but I think a true one.
[3] Elaborated on a slightly different but related point further in my comment below to him with an example.
[4] Vague, but I think acceptably so. To elaborate, I mean making one’s ideas palatable to the person one is disagreeing with, even while in disagreement. Note: I’m aware this doesn’t acknowledge the cost of doing so and running that filter. Note also: I think, with skill and practice, this can be done without sacrificing the content of the message. It is almost always more time-consuming, though, in my experience.
[5] There’s some subjective judgments and utility function stuff going on, which is subjective naturally, but his core factual arguments, premises, and analyses basically all look correct to me.
[6] Hedged hypothesis. Note: doesn’t make a judgment either way as to whether it’s worth it or not.
[7] Added after writing to double-check I’m playing by the rules and clear up ambiguity. “For the hell of it” is just random stylishness and can be safely mentally deleted.
(Or perhaps, if I introspect closely, a way to not be committed to this level of rigor all the time. As stated below though, minor stylistic details aside, I’m always grateful whenever a member of a community attempts to encourage raising and preserving high standards.)
Or, if we want to go all max-Schelling at the risk of veering almost into Stalinism, tell people they’ll get a karma bounty for pressing it, but then coordinate with LW, CFAR, MIRI, and various meetups to ban that person for life from everything if they actually do it. 😂
First — congratulations.
Second — an observation and a bit of an abstract question.
Observation: it seems to me that it’s often the most introspective, pro-social, and thoughtful people that seem to wrestle with things like shame and potentially damaging self-concept.
Can you think of why that might be true? Obviously I don’t know you super well, but you always came across like a very admirable person to me; i.e., exactly the type of person that would benefit least from rumination and feelings of shame or anxiety that might lead to some sort of paralysis.
It seems to me that the more pro-social, reflective, and thoughtful someone is, the more ideal it would be for society for that person to go confidently through life, no? Yes, of course, everyone gets some stuff wrong, and you don’t want to shut down introspection, but... I wonder why this is? Is it that being very thoughtful causes both pro-sociality and rumination/shame/anxiety? Or that going through a round of heavy rumination makes one more pro-social? Or that becoming pro-social leads one to higher standards and more rumination?
Trying to navigate the cause-and-effect a little bit, but it seems like a darn shame to me.
Congrats again, of course — and any thoughts on why the general case occurs?
What an incredible experience.
Felt like I got to understand myself a bit better, got exposed to a variety of arguments I never would have anticipated, forced to clarify my own thoughts and implications, did some math, did some sanity-check math on “what’s the value of destroying some of Ben Pace’s faith in humanity” (higher than any reasonable dollar amount alone, incidentally — and that’s just one variable)… and yeah, this was really cool and legit innovative.
We should make sure the word about this gets out more.
We need more people on LessWrong, and more stuff like this.
People thinking this is just a chat board should think a little bigger. There’s some real visionary thinking going on here, and an exceptionally smart and thoughtful community. I’m really grateful I got to see and participate in this. Thanks for all the great work — and for trusting me. Seriously. Y’all are aces.
Huh, I’d never heard of that. Great story. Thanks for sharing -
http://en.wikipedia.org/wiki/Gaius_Mucius_Scaevola
“I am Gaius Mucius, a citizen of Rome. I came here as an enemy to kill my enemy, and I am as ready to die as I am to kill. We Romans act bravely and, when adversity strikes, we suffer bravely.” He also declared that he was one of three hundred other Romans willing to give their own life to kill Porsenna.(Ab Urbe Condita, II.12) Porsenna, fearful and angry, ordered Mucius to be cast into the flames. Mucius stoically accepted this punishment, preempting Porsenna by thrusting his hand into that same fire and giving no sign of pain. Impressed by the youth’s courage, Porsenna freed Mucius.
Hey—to preface—obviously I’m a great admirer of yours Kaj and I’ve been grateful to learn a lot from you, particularly in some of the exceptional research papers you’ve shared with me.
With that said, of course your emotions are your own but in terms of group ethics and standards, I’m very much in disagreement.
The upset feels similar to what I’ve previously experienced when something that’s obviously a purely symbolic gesture is treated as a Big Important Thing That’s Actually Making A Difference.
On the one hand, you’re totally right. On the other hand, basically the entire world is made up of abstractions along these lines. What can the Supreme Court opinion in Marbury v. Madison be recognized as, other than a purely symbolic gesture? Madison wasn’t going to deliver the commissions, Chief Justice Marshall (no relation) knew that for sure, and he made a largely symbolic gesture in how he navigated the thing. It had no practical importance for a long time, but it now forms one of the foundations of American jurisprudence, indirectly affecting billions of lives. At the time, though, if you dig into the history, it really was largely symbolic.
The world is built out of all sorts of abstract symbolism and intersubjective convention.
That by itself wouldn’t trigger the reaction; the world is full of purely symbolic gestures that are claiming to make a difference, but they mostly haven’t upset me in a long time. Some of the communication around Petrov Day has. I think it’s because of a sense that this idea is being pushed on people-that-I-care-about as something important despite not actually being in accordance to their values, and that there’s social pressure for people to be quiet about it and give in to the social pressure at a cost to their epistemics.
Canonical reply is this one:
https://www.lesswrong.com/s/pvim9PZJ6qHRTMqD3/p/7FzD7pNm9X68Gp5ZC
(“Canonical” was intentionally chosen, incidentally.)
I feel like Oliver’s comment is basically saying “people should have taken this seriously and people who treat this light-heartedly are in the wrong”. It’s spoken from a position of authority, and feels like it’s shaming people whose main sin is that they aren’t particularly persuaded by this ritual actually being significant, as no compelling reason for this ritual actually being significant has ever been presented.
https://www.lesswrong.com/posts/tscc3e5eujrsEeFN4/well-kept-gardens-die-by-pacifism
From Well-Kept Gardens:
In any case the light didn’t go on in my head about egalitarian instincts (instincts to prevent leaders from exercising power) killing online communities until just recently. [...] I have seen rationalist communities die because they trusted their moderators too little.
Honestly, for anything that wasn’t clearly egregiously wrong, I’d support the leadership team on here even if my feelings ran in a different direction. Like, leadership is hard. Really really really hard. If there was something I didn’t believe in, I’d just quietly opt out.
Now, I fully understand I’m in the minority on this position — but I’m against ‘every interpretation is valid’ type thinking (why would every interpretation be valid as it relates to a group activity where your behavior affects the whole group?).
Likewise, pushing back against “shaming people whose main sin is that they aren’t particularly persuaded by this ritual actually being significant” — isn’t that actually both good and necessary if we want to be able to coordinate and actually solve problems?
There’s a dozen or so Yudkowsky citations about this. Here’s another:
https://www.lesswrong.com/posts/KsHmn6iJAEr9bACQW/bayesians-vs-barbarians
Let’s say we have two groups of soldiers. In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy. In group 2, everyone at all levels knows all about tactics and strategy.
Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?
In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.
And finally,
Now it may be the case—a more agreeable part of me wants to interject—that this ritual actually is important, and that it should be treated as more than just a game.
But.
If so, I have never seen a particularly strong case being made for it.
I made that case last year, extensively. I even did, like, math and stuff. The “shut up and multiply” thing.
Long story short — I think shared trust and demonstrated cooperation are super valuable, good leadership is incredibly underappreciated, and whimsical defection is really bad.
Again though — all written respectfully, etc etc, and I know I’m in the minority position here in terms of many subjective personal values, especially harm/care and seriousness/fun.
Finally, it’s undoubtedly true that my estimate of the potential utility of building out a base of successfully navigated low-stakes cooperative endeavors is multiple orders of magnitude higher than others’. I put the dollar value of that as, actually, pretty high. Reasonable minds can differ on many of these points, but that’s my logic.
Nooooo you’re a good person but you’re promoting negotiating with terrorists literally boo negative valence emotivism to highlight third-order effects, boo, noooooo................
>I broke even against the Nash programs, utterly crushed vulnerable programs, and lost a non-trivial amount to only one program, a resounding heads-up defeat handed to me by the only other top-level gamer in the room, fellow Magic: the Gathering semi-pro player Eric Phillips.
Great series.
Do you have the win/loss stats or final amounts by strategy? Or a rough approximation from memory?
Perhaps the most insightful comment I ever read on Hacker News went something like,
I can’t find the exact comment but I found that very insightful.