One interpretation on which Holden would have been consistent over time: he did not think Anthropic should unilaterally pause AI development if other companies raced ahead. But he did think the RSP should say they would pause whenever risks were unmitigated, regardless of context and race dynamics, because saying so in the RSP is a good forcing function for the actual benefits he hoped would follow from it.
(To be clear, I do not know what Holden believed; I'm just constructing a plausible reality.)
(Also, even then, he at least seems to have changed his mind about whether writing down If-Then commitments is a good idea!)