As someone who was very unhappy with last year’s implementation and said so (though not in the public thread), I think this is an improvement and I’m happy to see it. In previous years, I didn’t get a code, but if I’d had one I would have very seriously considered using it; this year, I see no reason to do that.
I do think that, if real value gets destroyed as a result of this, then the ethical responsibility for that loss of value lies primarily with the LW team, and only secondarily with whoever actually pushed the button. So if the button got pushed and some other person were to say “whoever pushed the button destroyed a bunch of real value” then I wouldn’t necessarily quibble with that, but if the LW team said the same thing then I’d be annoyed.
So this wound up going poorly for me for various reasons. I ultimately ended up not doing the fast, and have been convinced that I’m not going to be able to in the future either, barring unanticipated changes in my mental-health situation. Other people are going to be in a different situation and that seems fine. But there are a couple community-level things that I feel ought to be expressed publicly somewhere, and this is where they’re apparently allowed, so:
First, it’s not a great situation if there are like three rationalist holidays and one of them is this dangerous/unhealthy for a substantial fraction of people (e.g., eating disorders, which appear to exist at a high rate in the ratsphere). As far as I can tell, nobody intended that outcome; the original Vavilov Day proposal was like 90% “individual thing to do for personal reasons”, 10% “new rationalist holiday”, and then commenters here and on social media seized on the 10% because we currently don’t have enough rationalist holidays and people are desperate for more. (This is why, e.g., the original suggestion that people propose alternative ways of honoring Vavilov didn’t get any traction; that wouldn’t have met the pent-up demand for more ritual as effectively, so there wasn’t interest.) But it meant that the choice was between “do something that’s maybe not at all a good idea for you” and “lose access to communal affirmation of shared values with no available substitute”. The idea here isn’t that there shouldn’t be anything this risky; it’s that something this risky should be one thing among many, and right now we aren’t there.
The counterpoint is that if we hold every new idea to a “good for the overall shape of the community” standard then defending ideas from critics becomes too unrewarding and we don’t get any new ideas at all. Bulldozer vs. vetocracy, except mediated by informal community attitudes rather than by any authority. This seems like a valid point to me and I don’t have any particularly helpful thoughts about how to navigate this tradeoff.
(It might have been possible to mitigate the tradeoff—assuming we wanted something like Vavilov Day to be a rationalist holiday at all, rather than an individual thing, which maybe we didn’t—by putting more overt focus on questions like “how should people decide whether this is good for them” and “how should people whom this isn’t good for relate to it”. But while these seem pretty non-costly to me, it might be the case that other people have different ideas for what non-costly precautions should be taken, and if you try to take all of them then it’s not non-costly anymore. Again, I don’t know.)
Second, I’ve heard from multiple sources that some people had concerns about the event but felt that they couldn’t express them in public. (You should take this claim with a grain of salt; not all of my knowledge here is firsthand, and even with respect to what is, since I’m not providing any details, you can’t trust that I haven’t omitted context that would lead you to a different conclusion if you knew it.) The resulting appearance of unanimity definitely left me feeling pretty unnerved and made it hard to tell whether I should participate. There are obvious reasons for people to refrain from public criticism—to the extent that it’s a personal thing, maybe we shouldn’t criticize people’s life choices, and to the extent that it’s a community thing, maybe we should err on the side of non-criticism in order to prevent chilling effects—and I don’t really have any useful thoughts about what to think or do about this. I’m not sure anyone should particularly do anything differently based on this information. But I’d feel remiss if I allowed it to just not exist in public at all.
(This wound up being mostly about the meta-level ritual/holiday stuff, but I’m posting it in this thread rather than the other one because I wanted to say something about the application of that meta-level stuff to this particular situation, rather than about how to build rationalist ritual/holidays in full generality. I’m basically in favor of the things being suggested in the other thread; my only serious worry is that nobody will actually do them, given that many of them have been suggested before.)
This strikes me as a purely semantic question regarding what goals are consistent with an agent qualifying as “friendly”.
He tweeted his approval.
Correction: The annual Petrov Day celebration in Boston has never used the button.
I’ve talked to some people who locked down pretty hard pretty early; I’m not confident in my understanding but this is what I currently believe.
I think characterizing the initial response as over-the-top, as opposed to sensible in the face of uncertainty, is somewhat the product of hindsight bias. In the early days of the pandemic, nobody knew how bad it was going to be. It was not implausible that the official case fatality rate for healthy young people was a massive underestimate.
I don’t think our community is “hyper-altruistic” in the Strangers Drowning sense, but we do put a lot of emphasis on being the kinds of people who are smart enough not to pick up pennies in front of steamrollers, and on not trusting the pronouncements of officials who aren’t incentivized to do sane cost-benefit analyses. And we apply that to altruism as much as anything else. So when a few people started coordinating an organized response, and used a mixture of self-preservation-y and moralize-y language to try to motivate people out of their secure-civilization-induced complacency, the community listened.
This doesn’t explain why not everyone eased up on restrictions once the epistemic Wild West of February and March gave way to the new normal later in the year. That seems more like a genuine failure on our part. I think I prefer Raemon’s explanation from this subthread: the concentrated attention that was required to make the initial response work turned out to be a limited resource, and it had been exhausted. By the time it replenished, there was no longer a Schelling event to coordinate around, and the problems no longer seemed so urgent to the people doing the coordinating.
Docker is not a security boundary.
Eh, if you read the raw results most are pretty innocuous.
Not at the scale that would be required to power the entire grid that way. At least, not yet. This is of course just one study (h/t Vox via Robert Wiblin), but it provides at least a rough picture of the scale of the problem.
I feel obligated to link to my house’s Petrov Day “Bad/X-risk Future” candle.
Cross-posting from Facebook:
Any policy goal that is obviously part of BLM’s platform, or that you can convince me is, counts. Police reform is the obvious one but I’m open to other possibilities.
It’s fine for “heretics” to make suggestions, at least here on LW where they’re somewhat less likely to attract unwanted attention. Efficacy is the thing I’m interested in, with the understanding that the results are ultimately to be judged according to the BLM moral framework, not the EA/utilitarian one.
Small/limited returns are okay if they’re the best that can be done. Time preference is moderately high (because that matches my assessment of the BLM moral framework) but still limited.
Suggestions from non-Americans are fine.
It is easy to get the impression that the concerns raised in this post are not being seen, or are being seen from inside the framework of people making those same mistakes.
I don’t have a strong opinion about the CFAR case in particular, but in general, I think this impression is pretty much what happens by default in organizations, even when the people running them are smart and competent and well-meaning and want to earn the community’s trust. Transparency is really hard, harder than I think anyone expects until they try to do it, and to do it well you have to allocate a lot of skill points to it, which means allocating them away from the organization’s core competencies. I’ve reached the point where I no longer find even gross failures of this kind surprising.
(I think you already appreciate this but it seemed worth saying explicitly in public anyway.)
The organizer wound up posting their own event: https://www.lesswrong.com/events/ndqcNdvDRkqZSYGj6/ssc-meetups-everywhere-1
This looks like a duplicate.
Nit: I think this game is more standardly referred to in the literature as the “traveler’s dilemma” (Google seems to return no relevant hits for “almost free lunches” apart from this post).
Irresponsible and probably wrong narrative: Ptolemy and Simplicius and other pre-modern scientists generally believed in something like naive realism, i.e., that the models (as we now call them) that they were building were supposed to be the way things really worked, because this is the normal way for humans to think about things when they aren’t suffering from hypoxia from going up too many meta-levels, so to speak. Then Copernicus came along, kickstarting the Scientific Revolution and with it the beginnings of science-vs.-religion conflict, spurring many politically-motivated clever arguments about Deep Philosophical Issues. Somewhere during that process somebody came up with scientific anti-realism, and it gained traction because it was politically workable as a compromise position, being sufficiently nonthreatening to both sides that they were content to let it be. Except for Galileo, who thought it was bullshit and refused to play along, which (in conjunction with his general penchant for pissing people off, plus the political environment having changed since Copernicus due to the Counter-Reformation) got him locked up.
Oh, I totally buy that it was relevant in the Galileo affair; indeed, the post does discuss Copernicus. But that was after the controversy had become politicized and so people had incentives to come up with weird forms of anti-epistemology. Absent that, I would not expect such a distinction to come up.
This essay argues against the idea of “saving the phenomenon”, and suggests that the early astronomers mostly did believe that their models were literally true. Which rings true to me; the idea of “it doesn’t matter if it’s real or not” comes across as suspiciously modern.