[Question] Do I count as e/​acc for exclusion purposes?

EDIT: THIS IS NOT APRIL FOOLS RELATED

ALSO: This is specific to the LW scene in and around Berkeley, as that is the only place where e/acc exclusion is asserted to take place.

I haven’t been around the LW scene for some time, but I understand it’s common to exclude e/acc people from events. I further understand this to be exclusion on philosophical grounds, not just because LW-ites tend to view e/acc people individually as unlikeable.

I personally don’t want to try to sneak into LW parties if I’m someone that the hosts are trying to exclude on philosophical grounds. So I’d rather clarify whether, in the opinion of various people, I count.

It’s common among e/acc people to say things like “We’re so close, just don’t die,” by which they mean that AGI is close. They also want to create AGI as soon as possible. By contrast, LW-ites typically believe that AGI is close and that it is therefore necessary to slow down or stop AGI development as soon as possible, in order to ensure that future development is done safely.

I part ways from both camps in believing that we’re nowhere close to AGI, that the apparently impressive results from LLMs are highly overrated, and that the X-risk from AI is zero for the foreseeable future. If I didn’t think this, I would be sympathetic[1] to the desire to stop AI until we thought we could do it safely. But I do think this, so AI safety seems like a Victorian Nuclear Regulatory Commission. The NRC is a good thing, but it’s going to be a while before splitting the atom is even on the table.

As a result, in practice I think I’m functionally e/acc, because I don’t want to stop the e/acc people from trying to push AGI as fast as possible. I don’t think they’re actually an X-risk, since they’re not going to succeed any time soon. But I’m theoretically decel, because if I thought anyone was anywhere close to AGI I’d be sympathetic to efforts to restrain it. As it is, I think the AI safety people can continue to study AI safety for years, confident that they can finish all the theories off long before those theories actually become necessary for survival.

In light of that, if you’re the sort of person who wants to exclude e/acc people from your party, should I just not show up? That’s fine with me; I’d just as soon know ahead of time.

Actually, the fact that I even have to ask this question makes me disinclined to show up anyway, but I’m sort of curious what people would say.


  1. ↩︎

    “Sympathetic” does not necessarily mean “in favor of.” It’s a practical question whether various strategies for controlling AI development are feasible or worth their risks. If you have to risk nuclear war to ensure the other players don’t cheat, it might not be worth it. Thus I’m not comfortable saying in the abstract, “I’m in favor of measures to control AI development,” given that I’m not sure what those measures are.