First, this is a minor point where you’re wrong, but it’s also a sufficiently obvious point that it should hopefully make clear how wrong your world model is: the AI safety community in general, and DeepMind + Anthropic + OpenAI in particular, have all made your job FAR easier. This should be extremely obvious upon reflection, so I’d like you to ask yourself how on earth you ever thought otherwise. CEOs of leading AI companies publicly acknowledging AI risk has been absolutely massive for public awareness of AI risk and its credibility. You regularly bring up CEOs of leading AI companies acknowledging AI risk as a talking point, so I’d hope that on some level you’re aware that your success in public advocacy would be massively reduced in the counterfactual case where the leading AI orgs were Google Brain, Meta, and NVIDIA, and their leaders were saying “AI risk? Sounds like sci-fi nonsense!”
The fact that people disagree with your preferred method of reducing AI risk does not mean that they are EVIL LIARS who are MAKING YOUR JOB HARDER and DOOMING US ALL.
I disagree that this is obviously wrong. I think you are not considering the correct counterfactual. From Connor L.’s point of view, the people at the AI labs are genuinely worried about existential risk, but run 4D-chess calculations and conclude that they have to send mixed signals about it. Since Connor thinks these decisions run counter to the goal, counterfactually they are making his life harder by not just stating their worry and its consequences clearly. The counterfactual is not “if the AI labs did not exist”. That said, I’m not so confident I understand what he’s thinking, but you are excluding a reasonable possibility, so it’s not “obvious” in the way you claim.
Overall, I think your comment is one of those cases where you indulge in the same sin you want to point out. See e.g. your overconfident epilogue.
Yeah, fair enough.
But I don’t think that would be a sensible position. The correct counterfactual is in fact the one where Google Brain, Meta, and NVIDIA led the field. If DM + OpenAI + Anthropic didn’t exist (something he has publicly wished for), that is the most likely situation we would find ourselves in. We certainly wouldn’t find CEOs who advocate for a total stop on AI.