Because only a small number of people attended Part 1, we’re cancelling the in-person Part 2 and will finish drafting the voting guide on Discord instead. If you want to help, join the Boston Less Wrong server at https://discord.com/invite/2N2ADpACkw, then find the thread at https://discord.com/channels/877713285099704371/1429532210515673129.
I think that either omitting the don’t-read-the-citations-aloud stage direction, or making it easier to follow (with a uniform italic-text-is-silent convention), would be fine, and I don’t have a strong opinion as to which is better. But before Boston made the change I’m now suggesting, what tended to happen was that people inconsistently read or didn’t read the citations aloud, and this was confusing and distracting.
This is good and I approve of it.
A few random notes and nitpicks:
I believe the first Petrov Day was in Boston in 2013, not 2014.
“More than 20 people”? 20 seems to me like far too many; I never do a table with more than 11. (If you have exactly 11 people, you have to put them all at one table, because doing the ceremony properly requires at least six per table; that’s how many Children there are at the end. But if I had 22 people I might split them into three groups rather than two; I haven’t yet had to actually decide this.)
Boston significantly reduced the incidence of people reading the quote citations out loud by putting them in italic text, just like the stage directions, and then including a uniform “don’t read italic text out loud” stage direction.
The version of the ceremony on the site includes the inaccurate account of the Arkhipov incident made up by Noam Chomsky. You can see Boston’s corrected-after-fact-checking version starting on page 30 of this doc.
I have also been repeatedly told that the story in the ceremony of the Black Death’s effect on human progress is wrong, but haven’t changed it because I don’t really understand what’s wrong with it and don’t have an alternative lined up.
Petrov received the Dresden Peace Prize, not the International Peace Prize, which was long defunct by 2013.
Hitler’s rise to power in Germany started in 1919 and was complete by 1934, so can’t really be said to have occurred “in 1939”. (I just replaced this with “in the 1920s”.)
I still think the gag of duplicating the “preserving knowledge required redundancy” section is hilarious and should be included :-P
Domain seems to have expired, so I bought it and got it working again.
(Epistemic status: Not fully baked. Posting this because I haven’t seen anyone else say it[1], and if I try to get it perfect I probably won’t manage to post it at all, but it’s likely that this is wrong in at least one important respect.)
For the past week or so I’ve been privately bemoaning to friends that the state of the discourse around IABIED (and therefore on the AI safety questions that it’s about) has seemed unusually cursed on all sides, with arguments going in circles and it being disappointingly hard to figure out what the key disagreements are and what I should believe conditional on what.
I think maybe one possible cause of this (not necessarily the most important) is that IABIED is sort of two different things: it’s a collection of arguments to be considered on the merits, and it’s an attempt to influence the global AI discourse in a particular object-level direction. It seems like people coming at it from these two perspectives are talking past each other, and specifically in ways that lead each side to question the other’s competence and good faith.
If you’re looking at IABIED as an argumentative disputation under rationalist debate norms, then it leaves a fair amount to be desired.[2] A number of key assumptions are at least arguably left implicit; you can argue that the arguments are clear enough, by some arbitrary standard, but it would have been better to make them even clearer. And while it’s not possible to address every counterargument, the book should try hard to address the smartest counterarguments to its position, not just those held by the greatest number of not-necessarily-informed people. People should not hesitate to point out these weaknesses, because poking holes in each other’s arguments is how we reach the truth. The worst part, though, is that when you point this out, proponents don’t eagerly accept feedback and try to modulate their messaging to point more precisely at the truth; instead, they argue that they should be held to a lower epistemic standard and/or that the hole-pokers should have a higher bar for hole-poking. This is really, really not a good look! If you behaved like that on LessWrong or the EA Forum, people would update some amount towards the proposition that you’re full of shit and they shouldn’t trust you. And since a published book is more formal and higher-exposure than a forum post, you should be correspondingly more epistemically careful. Opponents are therefore liable to conclude that proponents have turned their brains off and are just doing tribal yelling, with a thin veneer of verbal sophistication applied on top for the sake of social convention.
If you’re looking at IABIED as an information op, then it’s doing a pretty good job balancing a significant and frankly kind of unfair number of constraints on what a book has to do and how it has to work. In particular, it bends extremely far over backwards to accommodate the notoriously nitpicky epistemic culture of rationalists and EAs, despite these not being the most important audiences. Further hedging is counterproductive, because in order to be useful, the book needs to make its point forcefully enough to overcome readers’ bias towards inaction. The world is in trouble because most actors really, really want to believe that the situation doesn’t require them to do anything costly. If you tell them a bunch of nuanced hedgey things, those biases will act on your message in their brains and turn it into something like “there’s a bunch of expert disagreement, we don’t know things for sure, but probably whatever you were going to do anyway is fine”. Note that this is not about “truth vs. propaganda”; basically every serious person agrees that some kind of costly action is or will be required, so if you say that the book overstates its case, or other things that people will predictably read as “the world’s not on fire”, they will thereby end up with a less accurate picture of the world, according to what you yourself believe. And yet opponents insist upon doing precisely this! If you actually believe that inaction is appropriate, then so be it, but we know perfectly well that most of you don’t believe that and are directionally supportive of making AI governance and policy more pro-safety. So saying things that will predictably soothe people further asleep is just a massive own-goal by your own values; there’s no rationalist virtue in speaking to the audience that you feel ought to exist instead of the one that actually does. Proponents are therefore liable to conclude that opponents either just don’t care about the real-world stakes, or are so dangerously naive as to be a liability to their own side.
[1] Though it’s likely someone did and I just didn’t see it.
[2] I’ve been traveling, haven’t made it all the way through the book yet, and am largely going by the reviews. I’m hoping to finish it this week, and if the book’s content turns out to be relevantly different from what I’m currently expecting, I’ll come back and post a correction.
We’re in the room now and can let people in.
You don’t think the GitHub thing is about reducing server load? That would be my guess.
This is addressed in the FAQ linked at the top of the page. TL;DR: The author insists that the gist of the story is true, but acknowledges that he glossed over a lot of intermediate debugging steps, including accounting for the return time.
Does that logic apply to crawlers that don’t try to post or vote, as in the public-opinion-research use case? The reason to block those is just that they drain your resources, so sophisticated measures to feed them fake data would be counterproductive.
I didn’t downvote (I’m just now seeing this for the first time), but the above comment left me confused about why you believe a number of things:
- What methodology do you think MIRI used to ascertain that the Time piece was impactful, and why do you think that methodology isn’t vulnerable to bots or other kinds of attacks?
- Why would social media platforms go to the trouble of feeding fake data to bots instead of just blocking them? What would they hope to gain thereby?
- What does any of this have to do with the Social Science One incident?
- In general, what’s your threat model? How are the intelligence agencies involved? What are they trying to do?
- Who are you even arguing with? Is there a particular group of EAsphere people who you think are doing public opinion research in a way that doesn’t make sense?
Also, I think a lot of us don’t take claims like “I’ve been researching this matter professionally for years” seriously because they’re too vaguely worded; you might want to be a bit more specific about what kind of work you’ve done.
For people in Boston, I made a straw poll to gauge community sentiment on this question: https://forms.gle/5BJEG5fJWTza14eL9
I assume this is referring to the ancient fable “The Ant and the Grasshopper”, which is about what we would today call time preference. In the original, the high-time-preference grasshopper starves because it didn’t spend the summer stockpiling food for winter, while the low-time-preference ant survives because it did. Of course, alternate interpretations have been common since then.
Boston
Saturday, December 17; doors open at 6:30pm, Solstice starts at 7:15pm
69 Morrison Ave., Somerville, MA 02144
RSVPs appreciated for planning purposes: https://www.facebook.com/events/3403227779922411
Let us know in advance if you need to park onsite; the venue is accessible by public transportation. Note that we’re up a flight of stairs.
As someone who was very unhappy with last year’s implementation and said so (though not in the public thread), I think this is an improvement and I’m happy to see it. In previous years, I didn’t get a code, but if I’d had one I would have very seriously considered using it; this year, I see no reason to do that.
I do think that, if real value gets destroyed as a result of this, then the ethical responsibility for that loss of value lies primarily with the LW team, and only secondarily with whoever actually pushed the button. So if the button got pushed and some other person were to say “whoever pushed the button destroyed a bunch of real value” then I wouldn’t necessarily quibble with that, but if the LW team said the same thing then I’d be annoyed.
So this wound up going poorly for me for various reasons. I ultimately ended up not doing the fast, and have been convinced that I’m not going to be able to in the future either, barring unanticipated changes in my mental-health situation. Other people are going to be in a different situation and that seems fine. But there are a couple community-level things that I feel ought to be expressed publicly somewhere, and this is where they’re apparently allowed, so:
First, it’s not a great situation if there are like three rationalist holidays and one of them is this dangerous/unhealthy for a substantial fraction of people (e.g., people with eating disorders, which appear to exist at a high rate in the ratsphere). As far as I can tell, nobody intended that outcome; the original Vavilov Day proposal was like 90% “individual thing to do for personal reasons”, 10% “new rationalist holiday”, and then commenters here and on social media seized on the 10% because we currently don’t have enough rationalist holidays and people are desperate for more. (This is why, e.g., the original suggestion that people propose alternative ways of honoring Vavilov didn’t get any traction; that wouldn’t have met the pent-up demand for more ritual as effectively, so there wasn’t interest.) But it meant that the choice was between “do something that’s maybe not at all a good idea for you” and “lose access to communal affirmation of shared values with no available substitute”. The idea here isn’t that there shouldn’t be anything this risky; it’s that something this risky should be one thing among many, and right now we aren’t there.
The counterpoint is that if we hold every new idea to a “good for the overall shape of the community” standard then defending ideas from critics becomes too unrewarding and we don’t get any new ideas at all. Bulldozer vs. vetocracy, except mediated by informal community attitudes rather than by any authority. This seems like a valid point to me and I don’t have any particularly helpful thoughts about how to navigate this tradeoff.
(It might have been possible to mitigate the tradeoff—assuming we wanted something like Vavilov Day to be a rationalist holiday at all, rather than an individual thing, which maybe we didn’t—by putting more overt focus on questions like “how should people decide whether this is good for them” and “how should people whom this isn’t good for relate to it”. But while these seem pretty non-costly to me, it might be the case that other people have different ideas for what non-costly precautions should be taken, and if you try to take all of them then it’s not non-costly anymore. Again, I don’t know.)
Second, I’ve heard from multiple sources that some people had concerns about the event but felt that they couldn’t express them in public. (You should take this claim with a grain of salt; not all of my knowledge here is firsthand, and even with respect to what is, since I’m not providing any details, you can’t trust that I haven’t omitted context that would lead you to a different conclusion if you knew it.) The resulting appearance of unanimity definitely left me feeling pretty unnerved and made it hard to tell whether I should participate. There are obvious reasons for people to refrain from public criticism—to the extent that it’s a personal thing, maybe we shouldn’t criticize people’s life choices, and to the extent that it’s a community thing, maybe we should err on the side of non-criticism in order to prevent chilling effects—and I don’t really have any useful thoughts about what to think or do about this. I’m not sure anyone should particularly do anything differently based on this information. But I’d feel remiss if I allowed it to just not exist in public at all.
(This wound up being mostly about the meta-level ritual/holiday stuff, but I’m posting it in this thread rather than the other one because I wanted to say something about the application of that meta-level stuff to this particular situation, rather than about how to build rationalist ritual/holidays in full generality. I’m basically in favor of the things being suggested in the other thread; my only serious worry is that nobody will actually do them, given that many of them have been suggested before.)
This strikes me as a purely semantic question regarding what goals are consistent with an agent qualifying as “friendly”.
Correction: The annual Petrov Day celebration in Boston has never used the button.
I’ve talked to some people who locked down pretty hard pretty early; I’m not confident in my understanding but this is what I currently believe.
I think characterizing the initial response as over-the-top, as opposed to sensible in the face of uncertainty, is somewhat the product of hindsight bias. In the early days of the pandemic, nobody knew how bad it was going to be. It was not implausible that the official case fatality rate for healthy young people was a massive underestimate.
I don’t think our community is “hyper-altruistic” in the Strangers Drowning sense, but we do put a lot of emphasis on being the kinds of people who are smart enough not to pick up pennies in front of steamrollers, and on not trusting the pronouncements of officials who aren’t incentivized to do sane cost-benefit analyses. And we apply that to altruism as much as anything else. So when a few people started coordinating an organized response, and used a mixture of self-preservation-y and moralize-y language to try to motivate people out of their secure-civilization-induced complacency, the community listened.
This doesn’t explain why some people didn’t ease up on restrictions once the epistemic Wild West of February and March gave way to the new normal later in the year. That seems more like a genuine failure on our part. I think I prefer Raemon’s explanation from this subthread: the concentrated attention that was required to make the initial response work turned out to be a limited resource, and it had been exhausted. By the time it replenished, there was no longer a Schelling event to coordinate around, and the problems no longer seemed so urgent to the people doing the coordinating.
Are you at some point going to do a postmortem of the “try to fix Solstice group singing” thing? IIUC this was an announced goal of this particular Solstice, and wound up somewhat overshadowed by the other stated goal, but I personally would be curious to hear more details about what exactly you think was wrong with group singing at previous Solstices, what you were trying to do to fix it, and whether you think it succeeded.