Before jumping into critique, the good:
- Kudos to Ben Pace for seeking out and actively engaging with contrary viewpoints
- The outline of the x-risk argument and history of the AI safety movement seem generally factually accurate
The author of the article makes quite a few claims about the details of PauseAI’s proposal, its political implications, and the motivations of its members and leaders, all without actually joining the public Discord server, participating in the open Q&A new-member welcome meetings (I know this because I host them), or even showing evidence of spending more than 10 minutes on the website. All of these basic research opportunities were readily available and would have taken far less time than the article took to write. This tells you everything you need to know about the author’s integrity, motivations, and trustworthiness.
That said, the article raises an important question: “buy time for what?” The short answer is: “the real value of a Pause is the coordination we get along the way.” Something as big as an international treaty doesn’t just drop out of the sky because some powerful force emerges and imposes it against everyone else’s will. Think about the end goal and work backwards:
1) An international treaty requires
2) Provisions for monitoring and enforcement,
3) Negotiated between nations,
4) Each of whom genuinely buys into the underlying need
5) And is politically capable of acting on that need because it reflects the interests of their constituents
6) Because the general public understands AI and its implications enough to care about it
7) And feels empowered to express that concern through an accessible democratic process
8) And is right to feel empowered, because their interests are not overridden by Big Tech lobbying
9) Or distracted into incoherence by internal divisions and polarization
An organization like PauseAI can only have one “banner” ask (1), but (2-9) are instrumentally necessary—and if those were in place, I don’t think it’s at all unreasonable to assume society would be in a better position to navigate AI risk.
Side note: my objection to the term “doomer” is that it implies a belief that humanity will fail to coordinate, solve alignment in time, or be saved by any other means, and thus will actually be killed off by AI. That position deserves a distinct category from simply believing that the risk of extinction by default is real.