“I’m curious: how do you estimate the product of seven and five?”
Sanjay Vakil to Colin Percival on Hacker News: “Did you win the Putnam?”
Benquo vs. Ziz on “Punching Evil” is prescient in hindsight.
The slapfight on “‘Rationalist Discourse’ Is Like ‘Physicist Motors’” is peak drama but would need a lot of editing.
Zack_M_Davis
“It’s over, Haltmann,” said Kirby. “You’ve lost. Disable the Access Ark.”
Haltmann had lost a lot of blood but could still move and speak. “No,” he croaked.
“Do it or I’ll kill you,” said Kirby.
Haltmann laughed. “A threat? You insult my knowledge of decision theory, Kirby. The Access Ark only responds to my biosignature. If you want me to do something for you, you’ll have to offer me something I want.”
Kirby shrugged and ate Haltmann. “Great, now that I have his biosignature—”
A turret on the wall shot balls of some sort of sticky substance at Kirby. It smelled like peanut butter. “What—what—” Kirby gasped, as his throat began to swell up.
“And his peanut allergy?” said a cold yet feminine metallic voice.
My guess is that the population-ecology and parenting advice posts would score well controlling for quality and author recognition, but the B2B sales advice and Kirby fanfic would get downvoted unless there was some kind of thematic local twist. Only one way to find out!
I specifically said weakly dominant “in terms of protecting yourself”, and pointed out that the case for blue depends on a form of altruism (“the only reason to choose blue is to rescue the other people who chose blue”). Please exercise better reading comprehension.
I agree that the random-death variation would make it vastly easier to coordinate on blue. The reason the dilemma is so controversial is that the perception of what other people will or should do is so susceptible to framing effects. It’s a lot easier to coordinate on red in the blender variation.
I can see why you’d say that, but I don’t think it works because of the asymmetry where the safety of red is unconditional, which makes red weakly dominant in terms of protecting yourself.
To be sure, if you’re trying to coordinate on blue (to rescue the inevitable blue-pushers), then you might construe red-pushers as blameworthy for undermining blue coordination, but I don’t think “protecting yourself from other red-pushers” works as a casuistry for the blame because of the logic of dominance; the blame has to be about failing to protect others.
Yeah, it sucks! My strategy has basically just been to … unilaterally cover the asymmetric effort myself, on the theory that, well, the world doesn’t owe me anything; if I want people to understand things that they’re not interested in understanding, the only way to get my wish is to write so well and cover all the angles so thoroughly that it becomes more embarrassing for them to pretend not to understand. It’s not entirely ineffective, but it comes at the cost of the prime years of my life. Sometimes I wonder if it’s a good use of my life, but it seems like an underprovided public good that I have a comparative advantage in. (Lots of people will write commercial software for money; not many people will do what I do out of religious fanaticism for the lost dream of rationality.)
The issue is that the only reason to choose blue is to rescue the other people who chose blue. If you know that some people will choose blue (children, people who didn’t think about the question very hard, &c.) and are confident that a majority will coordinate on rescuing them, fine. But in a situation where the stakes were real (such as the war march in the post, as contrasted to a Twitter poll with no real-world consequences), it would be harder to coordinate to rescue people who did something there was no other particular reason to do! That makes getting as many people as possible to save themselves and push red seem like a more attractive strategy.
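The dominance claim can be checked mechanically. A toy sketch (the ≥50%-of-votes survival threshold for blue is my assumption about the poll’s convention):

```python
def survives(choice, blue_fraction):
    # Assumed poll rules: red-choosers always survive; blue-choosers
    # survive only if at least half of everyone chose blue.
    if choice == "red":
        return True
    return blue_fraction >= 0.5

# Red is weakly dominant for self-protection: whatever fraction of
# others choose blue, picking red never does worse for you personally.
for f in [0.0, 0.25, 0.49, 0.5, 0.75, 1.0]:
    assert survives("red", f) >= survives("blue", f)
print("red weakly dominates (for self-protection)")
```

For every possible blue fraction, red does at least as well for you personally, which is what “weakly dominant in terms of protecting yourself” means; the case for blue has to come from the effect on others.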
I agree that it’s frustrating that people don’t read, but when I complain about that, I find that it’s tactically critical to specifically point to the part where I addressed their criticism. That is, I don’t just say, “I don’t think you read the post”, I say, “I don’t think you read the post, because if you had, you’d notice that I clearly addressed that in the paragraph starting with this-and-such.” That makes it more embarrassing for the critic who didn’t read, because it makes it legible to everyone that I’m not the one who’s bluffing.
that explains why SGD on overparameterized nets generalizes
Wait, I thought the singular learning theory stuff already did this part? (Just the “why SGD on overparameterized nets generalizes” part, not the “why particular architectural choices work” or “what particular features get learned” parts.) Neural networks being singular means that the parameter–function map is not a one-to-one correspondence, which means that simpler hypotheses (those that need fewer parameters to be specified or can correct “errors” in some parameters) occupy more volume in parameter-space and are easier for SGD to find first, such that training is implicitly doing a form of minimum-description-length program induction (with the learning coefficient being the measure of complexity rather than the parameter count). Is that too “qualitative” to count as an answer (because the architecture and feature prediction parts are the true test of knowledge)?
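A toy illustration of the volume argument (not singular learning theory proper; the product parameterization f(x) = w1·w2·x is my own minimal example of a singular, many-to-one parameter–function map):

```python
import random

random.seed(0)
N = 100_000
# Toy "network" f(x) = w1 * w2 * x: the function is determined only by
# the product c = w1 * w2, so many parameter settings realize the same
# function (the parameter-function map is many-to-one).
products = [random.uniform(-1, 1) * random.uniform(-1, 1) for _ in range(N)]

# Fraction of parameter-space volume mapping to "simple" functions
# (|c| near 0) versus "extreme" functions (|c| near 1).
near_zero = sum(abs(c) < 0.1 for c in products) / N
near_edge = sum(abs(c) > 0.9 for c in products) / N
print(near_zero, near_edge)  # near_zero is vastly larger
```

Analytically, P(|w1·w2| < t) = t(1 − ln t), so roughly a third of the parameter volume lands within 0.1 of the zero function but well under 1% lands within 0.1 of the extremes: a search process that finds regions roughly in proportion to their volume hits the degenerate (simpler) functions first.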
That weird futures market on the Anthropic IPO price (can’t find the link but saw it referenced on Twitter a bunch)
You may be thinking of Ventuals. Best wishes, Less Wrong Reference Desk
Thanks! You’re correct about the standard usage. In “Assume Bad Faith”, I’m arguing that the standard conception of good faith as the norm relative to which bad faith can be detected and punished fails to carve reality at the joints, because a lot of things that are usually considered unintentional should be considered relevantly intentional in a functionalist sense. I wasn’t confused about the standard meaning of the term; I’m explicitly making a weird philosophical argument that the standard meaning embeds confusions about human psychology and the nature of rationality.
That’s not actually responding to the criticism. Rat culture could just be wrong!
I’ve talked and corresponded with Michael a lot over the last 17 years (not regularly during all that time, but pretty frequently during 2017–2020), and I don’t recall him ever saying anything about Mage: The Ascension. That doesn’t necessarily mean he’s never played—I never asked him about it—but it’s some absence of evidence that undermines the “gets a lot of his material” claim.
I would strongly guess that you’re contributing to the phenomenon where gossip networks just make things up.
Rather, Zajko is biologically female, as reflected in other reporting.
If “Didicosm” evoked such emotion in you, be sure to also read “Death and the Gorgon”. (Commentary.)
I definitely wouldn’t call it “top rationality advice” because there are lots of other reasons to want to be liked and not want to be disliked, but I do expect the effect you describe to be real.
This isn’t obvious. What if people who like you tend to agree with your conclusions “as a favor” even if your arguments are bad?
President Donald Trump commented on Anthropic to CNBC’s Andrew Ross Sorkin on 21 April. The President said:
Anthropic is a group of very smart people, but they started telling our military how to operate, and we didn’t want that. They tend to be on the left, radical left, but we get along with them. In fact, they came to the office, they came to the White House a few days ago, and we had some very good talks with them, and I think they’re shaping up. They’re very smart, and I think they can be of great use. I like smart people; I like high IQ people. They definitely have high IQs. I think we’ll get along with them just fine.
If you want to bring people around to your viewpoint, make them like you.
This advice works equally well whether or not your viewpoint is true.
That’s right (blue is altruistically motivated, red is selfishly motivated), but I’m saying that the reason the other people are in danger of being killed is because they picked blue. If they had picked red, then they wouldn’t need altruists to make a risky choice in order to not kill them. That’s why I concede that, if you know some people will pick blue, it makes sense to want to coordinate on blue in order to rescue them. But if it were a switch that started in the red position rather than a pair of buttons, there would be no reason to be the first person to flick the switch to blue. (No one needed rescuing until you flicked your switch!)