LessWrong team member / moderator. I’ve been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I’ve been interested in improving my own epistemic standards and helping others to do so as well.
Raemon
This post inspired a pretty long-running train of thought for me that I'm still chewing on. I have considered pivoting my life to pursue the sort of vision this post articulates. I haven't actually done it because other things so far have seemed more urgent/tractable, but I still think it's pretty important.
Okay, I think I don't stand by my previous statement. More like: I expect that overall process to be a lot more expensive than just going for a big protest off the bat. Obviously, yeah, there's a more common pattern of escalating groundwork and smaller protests.
But I think that approach is dramatically more expensive, to the point where it doesn't seem worth my time, whereas just going straight for the big protest does.
I don't really have that much confidence that it's possible to get a big protest off the bat. But I think there is a discrete step-change between "you got the AI safety folk to all show up once" and "you got a substantial fraction of mainstream support." Once you're trying to do the latter, the SF benefit just seems very low to me.
The mechanism by which I'd try to hit 10k numbers involves starting from scratch and recruiting a lot of people, at which point I might as well just start in DC. A crux is that I expect a 100k protest to involve similar amounts of work as a 10k protest, and to require calling in favors from famous people that are very expensive and that I don't want to have to call in twice.
(I also note your Russia example starts with a 50k-100k protest, which is already a different league.)
Some reasons I’m more bullish on “just go for a big protest right off the bat.”
First, I just know that I'd be happy to show up for one protest, but feel pre-emptively exhausted at the idea of showing up for multiple. It feels like an easier ask to say "look, we know this is costly, but we're actually going to try to do this once, and not make repeated asks."
Relatedly, there's a lot of other major political stuff going on these days; trying to compete in that arena seems pretty hard. People have tons of outrage fatigue. It feels promising to distinguish yourself with "we're not trying to become a thing that will keep demanding your attention."
A long-running "stop AI" protest seems very likely to bleed into general anti-tech sentiment and end up making a lot of conflationary political claims that politicians will rightly discount; it's also more likely to become polarized.
We can reduce uncertainty about whether a major protest will work in a way that constrains the downside risk a lot, which is hard for a long-running, multi-protest movement-building thingy, so we only spend the effort if it looks like it's going to work.
Gotcha. Was the game one a real example for you? (I guess I'm looking for things that will show up in my day job, and trying to get a sense of whether people have different day jobs than me, or are doing random side projects, or what.)
The test-coverage one is interesting.
Yeah, I get the principle, but, like, what do you do in practice where this is useful? Like, concrete (even if slightly abstracted) examples of things you did with it.
This comment led me to realize there really needed to be a whole separate post just focused on "fluent crux-finding", since that's a necessary step in Fluent Cruxy Predictions, and it's the step that's more likely to immediately pay off.
Here’s that post:
Finding Cruxes: Help Reality Punch You In the Face
What sort of things do you solve with this? I feel like when I have a problem that's not fairly easy for an AI to solve straightforwardly, if I ran it in a loop it'd just do a bunch of random crazy shit that was clearly not the right solution.
I can imagine a bunch of scaffolding that helps, but it seems like most of the work is in the problem specification, and I'm not sure whether I don't have the sort of problems that benefit from this or it's a skill issue.
I think 10k ones approximately won’t do anything.
The thing I saw the sentence as doing is mostly clarifying "We're not naive; obviously just doing the naive thing here would not work, that's why we're not asking for it." (I think I agree that a US ban would be some-kind-of-useful, but it feels way less politically viable to me, since to most people it feels more like throwing away the lead for no reason. I realize it may sound weird to think banning in one country is less viable than banning worldwide, but I think the worldwide ban actually clearly makes sense, in a way that banning locally only maybe makes sense if you tune the parameters just right.)
“ban AI development temporarily near but not after max-controllable AI”
I'm not sure I'm parsing the grammar here; wondering if you flipped the sign or I'm misreading. (It sounds like "AIs that are almost uncontrollable are banned, uncontrollably powerful AIs are allowed.")
Yeah, when I was doing the graphics I considered a version where everyone was waving stop signs. It looked a bit weird as an illustration, but I suspect it would probably work in real life.
Yeah, it feels more like a paramilitary army, which is not the point.
(By contrast I do think “Stop Sign” is pretty good)
I think these look kinda scary, in particular the black/red. White/red does feel more reasonable, although the original March page was basically constrained by the aesthetic of the book cover, and by the time there's serious effort invested in this I'd expect the aesthetic to get an overhaul that isn't as tied to the book.
I think the effect there would be pretty minimal; there's not more than a few thousand people in any given city that are likely to show up. It'd be weird to ask 90k people to travel to San Francisco, since that doesn't send a particular message about international treaties. (You might run a different protest that is the "please, AI companies, unilaterally stop" one, but I don't actually think that protest makes much sense.)
I gave this a +4 because it feels pretty important for the “how to develop a good intellectual” question. I’d give it a +9 if it was better argued.
I think it’s generally the case that patterns need to survive to be good, but also it’s fairly normal for patterns to die and this to be kinda fine. (i.e. it’s fine if feudalism lasts a few hundred years and then is outcompeted by other stuff).
The application to superintelligence does seem, like, true and special, but probably most patterns that evolved naturally don't navigate superintelligence well, and I'm not sure it's the right standard for them.
Yeah, I think (not speaking for MIRI) that the FAQ should be rephrased so the vibe is more "here's what we believe, but there's a bunch of reasons you might want to support this."
> It’s not useful for only one country to ban advancement of AI capabilities within its own borders.
This seems to imply that the US government could not on its own significantly decrease p(doom).
I think my personal beliefs would say "it's not very useful" or something. I think the "ban AGI locally" plan is dependent on a pretty specific path to be useful, and I don't read the current phrasing as ruling out "one country bans it and also does some other stuff in conjunction." (Actually, upon reflection I'm not that confident I know what sort of scenario you have in mind here.)
(again not MIRI, just sharing my own models and understanding)
The whole point is to send a message to DC people, and by the time we’re talking about hitting 100,000 I don’t think being in the Bay Area helps that much.
I already added it. [/ozymandias]
Yeah. On my end I’m like “well, this will be among the more important things I do that month/year, I expect to just actually be able to prioritize it over other existing plans.”
I do think it'd be hypothetically nice to have an "80% likely to attend" or "I'mma make a good faith effort to attend" button, but it amps up the complexity of the page a bunch. (Realistically, I expect most people who sign up who are not rationalists* to mean something more like this in practice.)
For now I’d just say “click the notify me” option, which I think is still a useful signal.
* or other flavors of “take their word abnormally seriously”
That has a Review:
https://www.lesswrong.com/posts/8ZR3xsWb6TdvmL8kx/optimistic-assumptions-longterm-planning-and-cope#rvqkHKyvXNxgvzYWN