Catching the Eye of Sauron

(Edit 7/24/2023: Certain sections of this post I no longer endorse, but the central dilemma of the Eye remains)

The decision to reach out to the broad public isn't, or shouldn't be, one made lightly. However, once you are actively vying for the Eye of Sauron (writing in TIME, appearing on highly visible/viral podcasts, getting mentioned in White House press briefings, spending time answering questions from Twitter randos, and admitting you have no promising research directions by way of partially explaining why all this public-facing work is happening), you are no longer catering exclusively to a select subset of the population, and your actions should reflect that.

You are, whether you like it or not, engaged in memetic warfare, and recent events/information make me think this battle isn't being given proper thought.

Perhaps this wasn't entirely intentional, and now that the bear has been poked, MIRI may realize this isn't in their best interest. But surely it's better to either (1) avoid the Eye of Sauron completely and not concern yourself with public-facing memetics at all, or (2) commit to thorough, strategic, and effective memetic warfare. Instead, we are wandering around in a weird middle ground where, for example, Eliezer feels like X hours are well spent arguing with randos.

If we are to engage in memetics, low-hanging fruit abound, and they are being ignored:

  • Refusing to engage billionaires on Twitter, especially ones open enough to persuasion that they will drop $44 billion on something as pedestrian as a social media company.

  • Not even attempting to convince other high-leverage targets

  • Relying on old blog posts and one-on-one textual arguments instead of far more viral (and scalable!) media like video

  • Not updating the high-visibility instances of our arguments that do exist (video, aggregated text, etc.) to meet the AI skeptics where they are. (I'm not saying your actual model of the fundamental threat necessarily needs updating.)

  • Not attempting to generate or operationalize large bounties that would catch the attention of every smart person on the planet. Every seriously smart high schooler knows about the Millennium Prize Problems, and their reward is just $1 million. A pittance! You also don’t have to convince these masses about our entire alignment worldview; split up and operationalize the problems appropriately and people will want to solve them even if they disagree with doom! (non-seriously-thought-through example: either solve alignment problem X or convince committee Y that X isn’t a problem)

  • Relying on a “bottom-up” media strategy whereby the community is responsible for organizing and creating said media

  • Not attempting aggregation of the many scattered arguments into accessible, high-visibility resources

  • Not aiming for the very attainable goal of getting just the relatively small idea of ainotkilleveryoneism (need a better name; more memetics) into the general population (not the entire corpus of arguments!) to the same degree that global warming has been. You are effectively running a PR campaign, but the vast majority of people do not know that there is this tiny fervent subset of serious people who think literally every single person will die within the next 0-50 years, in an inescapable way, and that this is distinct from other commonly known apocalypses such as nuclear war or global warming. An AI that kills even a billion people is not the hypothesis under consideration, and that detail is something that can and should fit within the easily transmissible ainotkilleveryoneism meme.

  • During podcasts/interviews, offloading responsibility for directing the conversation, and for covering the foundations of the doom world model, onto the interviewers. (The instances I have in mind are the recent Bankless and Lex Fridman podcasts; I may provide timestamped links later. But to paraphrase, Eliezer basically says at the end of both: "Oh well, we didn't cover more than 10% of what we probably should have. ¯\_(ツ)_/¯")

  • Assuming we want more dignity, the question "what can we do?" should not be answered with (effectively; not a quote) "there is no hope". If that's actually a 4D chess move whereby the intended response is something like "oh shit, he sounds serious, let me look into this", surely you can just short-circuit that rhetoric straight into an answer like "take this seriously and go into research, donate, etc.", even if you don't think that is going to work. (We are doomers, after all; though come to think of it, maybe it's not good memetically for us to self-identify like that.) Even if you draw analogies to other problems that would require unprecedented mass coordinated efforts to solve, how is giving up dying with dignity?

  • Stepping right into a well-known stereotype by wearing a fedora. Yes, this has nothing to do with the arguments, but when your goal is effective memetics, it does in fact matter. Reality doesn't care that we justifiably feel affronted about this.

I want to be clear: Eliezer is one person who has already done more than I think could be expected of most people. But I feel like he may need a RIGBY of his own here, since many of the most powerful memetic actions to take would be best performed by him.

In any case, the entirety of our future should not be left to what he alone decides to do. Where are the other adults in the room who could connect these simple dots and pluck this low-hanging fruit? You don't need to export ALL of the corpus of reasons, or even the core foundations of why we are concerned about AI; the meme of ainotkilleveryoneism is the bare minimum that needs to be in everyone's heads as a serious possibility considered by serious people.

Further, completely dropping the ball on memetics like this makes me concerned that what we non-insiders see being done… is all that is being done. That there aren't truly weird, off-the-wall, secret-by-necessity things being tried. Four hours ago, I would have bet everything I own that Eliezer was at least attempting extensive conversations with the heads of AI labs, but given that this apparently isn't happening, what else might not be?

(Edit: I meant to hit this point more directly: Eliezer, in his podcast with Dwarkesh Patel, also said that he has tried "very, very hard" to find a replacement for himself, or just more high-quality alignment researchers in general. I'm not questioning the effort/labor involved in writing the Sequences, fiction, Arbital, or research in general (and everything but maybe the fiction should still have been done in all timelines in which we win), but to think producing the Sequences and advanced research is the best way to cast your net seems insane. There are millions of smart kids entering various fields, with perhaps thousands of them potentially smart enough for alignment. How many people of near-high-enough IQ/capability do you think have read the Sequences? Fewer than 100?)

(Edit 2: Another benefit of casting the net wider/more effectively: even if you don't find the other 100-1000 Eliezers out there, think about what is currently happening in alignment/AI discourse: we alignment-pilled semi-lurkers who can argue the core points, if not contribute to research, are outnumbered and not taken seriously. What if we 10-1000x our number? What if, by cultural suffusion, we reach a point where ainotkilleveryoneism no longer sounds like a crazy idea coming out of nowhere? For example, high-visibility researchers like Yann LeCun are actively avoiding any conversation with us, comparing the task to debating creationists. But he'll talk to Stuart Russell :\ )