What do you think “coordinate a pause” would have looked like in a pre-AI-race world? Most current pausers (most explicitly FLI) want to pause AI at the current level of capabilities, and even then to pause only the kind of agentic AI capabilities research that tends toward building AGI/ASI, while supporting development of narrow Tool AI. But the person most associated with defending the Tool AI over Agentic AI position in early-2010s LW discourse (@HoldenKarnofsky) is also the one who, more than anyone else mentioned here, is responsible for the AI race happening at all.
Edit: I think @Eliezer Yudkowsky’s Six Dimensions of Operational Adequacy in AGI Projects, a 2017 MIRI internal document intended as a criticism of the Karnofsky-led OpenPhil/EA approach to AI labs, is, characteristically, the first expression of MIRI’s pro-pause stance. So my conclusion is that the pro-pause stance only really makes sense if there’s already an AI arms race to stand in opposition to. (Mind you, Tomasik talks only of fostering general international cooperation and expecting lower AI risk as a positive flow-through effect.)
A pre-race pause effort could have looked like:

1. be more humble/uncertain about personally, organizationally, or civilizationally solving AI alignment/safety
2. talk about alignment/safety research potentially failing and having to coordinate a pause
3. don’t make plans/claims about unilaterally (i.e., without widespread agreement/consent) building a world-changing Friendly AI or Pivotal Task AI, thereby normalizing the idea and spreading it to others (and try to discourage such plans in others)
4. start/encourage LW discussions about politics, social activism/movements, and large-scale coordination, in theory and in practice
5. think ahead about, and encourage discussions about, potential strategies and obstacles to coordinating a pause
6. try to convince prominent AI researchers of AI x-risks and get them to spread the message to the public earlier
7. gather resources (people, funding, connections, etc.) and build institutions for a potential pause effort
8. depending on how these efforts go, do one or more of:
    a. try to pre-emptively ban/regulate AI development
    b. start an international safe AGI/ASI project while banning others
    c. wait for an AI arms race to start and then oppose it from a well-prepared position
So, I think (I haven’t read it in nearly a decade) Superintelligence did do 2 and 5, and pushed toward 8b as the ideal solution? MIRI and FLI did do 6 (e.g. Stuart Russell on the board, the 2015 letter) and 7, and eventually 8c, obviously. My main point here was that 8a is incoherent pre-race, and that the main defenders of 1-3 were the ones who actually launched the AI race.