Three-Monkey Mind
I was hoping that this post would be something of a Defense against the Dark Arts post, but it doesn’t seem to be:
[Dawkins saying “I do not intend to disparage trans people…”]
This doesn’t work. People see through it. It comes across as either dishonest or oblivious, and both make you look bad. If you want to genuinely engage in mistake theory discourse, do it in spaces where that’s actually the norm—academic seminars, private discussions, carefully-gated intellectual communities. Don’t perform neutrality in conflict arenas while making moves that clearly serve specific interests.
Suppose he actually wants to take the other side down a peg.
It seems to me that the only people who object to him playing power games are postmodernists, who think, by and large, that all communication is power games.
What is there to lose by playing power games with people who think that roughly all communication is power games?
Strong-downvoted for being indistinguishable from unedited ChatGPT output.
(That apparently Claude wrote it doesn’t matter.)
cybernetic models
What do you (all) mean by “cybernetic” here?
people who OOP
I’m assuming you’re not talking about object-oriented programming, but I can’t figure out what this acronym refers to.
I think a bunch of these kinds of things are plots for Seinfeld episodes.
That is an incredibly useful definition for a term I’ve seen floating around here for years — thanks!
…could it be put somewhere moderately prominent, where people can stumble over it?
I’m kind of hoping it could be somewhere prominent in the first page of results on https://www.lesswrong.com/search?query=hufflepuff. I’m looking at https://www.lesswrong.com/sequences/oyZGWX9WkgWzEDt6M and while your comment’s definition makes the page make sense, I wouldn’t be able to independently generate your comment’s definition from “comradery, reliability, trustworthiness, willingness to do physical work, willingness to stick with things for a long time, etc.”.
Is “Hufflepuff” (as a personality type) described anywhere concisely and more or less completely on LW? https://www.lesswrong.com/posts/DbdP8hD2AcKcdSsgF/project-hufflepuff-planting-the-flag seems like the closest thing to an explainer, but it seems incomplete. (https://www.greaterwrong.com/w/heroic-responsibility is exactly the sort of explainer that I’d want for Hufflepuff.)
And so, this post seems like a very bad example for some kinds of minds: heroic responsibility is when you say "it doesn't matter what role I have", so people who are blocked on imagining themselves as a business owner/leader would be put off by this instead of getting it.
Yeah, it’s not particularly heroic if you’re The Guy, even if it means you’re the one putting in 70-hour weeks fixing stuff that crops up to keep the business running because if you don’t fix it, nobody will, and the business will collapse.
Meanwhile, https://www.greaterwrong.com/w/heroic-responsibility — which I got to by clicking the "Heroic Responsibility" tag above the post — seems significantly clearer about the heroism aspect, and I don't think having read HPMOR ages ago makes it much more understandable to me than it would be to Joe Q. Public.
You’re not contradicting my point.
Pausing and thinking “should I just implement bubble sort, or should I go look up something that’s better for my use case and implement that instead” is work, and it’s not free.
Now, it might not be extra work the fourth time around (when you can knowingly choose and bang out block sort in your sleep just as easily as you can bubble sort…or say the thing in a way that doesn’t ruffle feathers at the dinner party), but it’s work initially, and isn’t free.
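To make the programming half of the analogy concrete, here's a minimal Python sketch (the comparison and function names are mine, not from the original exchange): the "bang it out without thinking" option next to the "pause and use something better for your use case" option.

```python
# The no-extra-work-up-front option: the sort you can write in your sleep.
def bubble_sort(items):
    """Hand-rolled bubble sort: O(n^2), but requires no pausing to look anything up."""
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

# The pause-and-look-it-up option: Python's built-in sorted() (Timsort),
# O(n log n) and already debugged for you.
def better_sort(items):
    return sorted(items)

print(bubble_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
print(better_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```

The second option costs a pause the first time around (you have to know, or go find out, that something better exists and fits your use case); only after that does it become as cheap as the first.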
Often it seems to me like there’s free grace for the same amount of honesty.
It’s not free.
One has to think, often in advance of the dinner party or whatever, how to phrase something differently to get more grace for the same amount of honesty.
while also explicitly naming “rationalists” in your list of groups that are trying to destroy religion
Quite honestly, if he’s not mentioning things like that, then the rest of what he says comes across as lying by omission to people who are sick and tired of being lied to by omission.
I would expect that the net result of this talk is to make anyone sympathetic to it discount the opinions of many of the people who've put the most work into understanding e.g. technical AI safety or AI governance.
My prior is that most of what is called “AI safety” work is Timnit compliance, not notkilleveryoneist work, but I’m open to updating.
p(doom)
How does one say this out loud?
Some days it’s hard to not start rooting for the paperclip maximizers.
Some days I actually do start rooting for the paperclip maximizers, but so far I’ve returned to not rooting for them in an hour or a day or two.
I’ve been chewing on the contents of this post for a week+ now.
I think the decision behind this post lurched my set point permanently towards, though not all the way to, "root for the paperclip maximizers", assuming habryka isn't overridden or removed for this.
When a site that's supposed to be humanity at its most rational removes one of its backstops against unimpeded woomongering in an attempt to get back authors who honestly seem happier and better-compensated writing on their Substacks, I'm tempted to cancel my pre-order of IABIED and shelve that one post that's been rattling around in my head that amounts to "Given that CCP cooperation is essential for notkilleveryoneism to win, have any of you Bay Areans really thought about how an NGO push in the PRC is going to look to them, in light of all the other NGO/quango pushes the US has made that the CCP actively defends against because they're obviously bad for the CCP and/or the PRC as a whole?".
I change my mind too frequently on the paperclip-maximizer question to deactivate my account or let the domain registration for https://www.threemonkeymind.com/ lapse, but I’m updating strongly towards LW not being a place where I want to help raise the local sanity waterline, since this sort of work is actively being thwarted by the moderation team.
A related phenomenon: Right-leaning Supreme Court justices move left as they get older, possibly because they’re in a left-leaning part of the country (DC) and that’s where all their friends are.
A typical justice nominated by a Republican president starts out at age 50 as an Antonin Scalia and retires at age 80 as an Anthony Kennedy. A justice nominated by a Democrat, however, is a lifelong Stephen Breyer.
[…]
The Cocktail Scene. Maybe the justices — human as they are, after all — want to fit in at parties. “Justices may be subject to influences by the Beltway cocktail scene and want to be perceived as reasonable and moderate,” Josh Blackman, a Supreme Court scholar at the South Texas College of Law, told me in an email. That assumes the cocktail set is liberal, what with its law professors and journalists. But that stereotype does exist in D.C. President Richard Nixon, for example, explicitly wondered if his Supreme Court nominee Harry Blackmun could “resist the Washington cocktail party circuit.”
It sounds way more like “raise the sanity waterline of smart people” than “raise the sanity waterline of the population at large”. If they wanted to raise the sanity waterline of the population at large, they’d be writing books for high- and middle-schoolers.
Wouldn’t they need to make right- and left-handed manicules unless they went for, like, a hamsa hand?
And in case “build things” isn’t concrete enough (it might very well be, in at least the case of software development): ship things.
You can spend a lot of time “building” things, only to get mired in choices that likely won’t matter at all, or matter very little, or can be changed easily enough later.
I think this comment would be made way better with the inclusion of a concrete example or two. I know there’s at least one book out there that can get compressed to a sound bite like this, but a concrete example or two would help explain why.
Human color and space perception doesn’t work symmetrically across light and dark contrasts, so a well-designed dark website and a well-designed light website just look very different from each other on many dimensions. You can of course do it with CSS, but we are not talking about just inverting all the colors, we are talking about at the very least hand-picking all the shades, and realistically substantially changing the layout, spacing and structure of your app (so e.g. you don’t end up with large grey areas in a dark mode setting, which stand out vastly more in dark mode than equally high-contrast grey sections in a light mode).
I'm hoping the negative agreement karma for the parent comment isn't for this — that it's just for "maintaining both a dark mode and a light mode design for a website is very hard" (emphasis added; maintaining as distinct from creating).
The above blockquote makes me want to say to the studio audience “Why are you booing — he’s right!”.
“You will be OK”, he says on the site started by the guy who was quite reasonably confident that nobody will be OK.