A: Yeah. I’m mostly positive about their goal of working towards “building the Pause button”. I think protesting against “relatively responsible companies” makes a lot of sense when these companies seem to use their lobbying power more against AI-Safety-aligned Governance than in favor of it. You’re obviously very aware of the details here.
B: I asked my question because I’m frustrated with that. Is there a way for AI Safety to coordinate a better reaction?
C:
There does not seem to be a legible path to prevent possible existential risks from AI without slowing down its current progress.
I phrased that a bit sharply, but I find your reply very useful:
Obviously P(doom | no slowdown) < 1. Many people’s work reduces risk in both slowdown and no-slowdown worlds, and it seems pretty clear to me that most of them shouldn’t switch to working on increasing P(slowdown).[1]
These are quite strong claims! I’ll take that as somewhat representative of the community. My attempt at paraphrasing: It’s not (strictly?) necessary to slow down AI to prevent doom. There is a lot of useful AI Safety work going on that is not focused on slowing/pausing AI. This work is useful even if AGI is coming soon.
Saying “PauseAI good” does not take a lot of an AI Safety researcher’s time.
Thank you for responding!