Why are we modeling the leading labs as (1) having a legible, unambiguous lead (i.e. they all agree internally that there’s an N month lead), (2) being willing to spend down their lead at all?
My whole understanding of the “spending down the lead” argument was always between countries. If it’s now set as being between companies I don’t see any reason to think realistically this would happen?
I wasn’t trying to assume in the post that leading AI companies have a legible, unambiguous lead and are willing to spend it down. I was just noting that my “Plan C” proposal requires that companies have a lead they are willing to spend down (which might not be legible or unambiguous, though I do think ambiguity gets less bad as AIs get more capable and their impacts become clearer).
I do think a lab being willing to spend down its lead depends on there being consensus among lab leadership, with a high degree of confidence, that such a lead exists. For example, given that Plan C requires that one of the current frontier labs pulls ahead, the lab would also need to pull ahead by a large enough margin that its leadership agrees it is definitely ahead.
Concretely, it seems to me like the “1-3 month lead” worlds are likely to collapse into Plan D. It’s also plausible to me that the “amount of margin the leading lab would need before they’d be willing to spend any on safety” is very high, and that they wouldn’t “spend down to zero”, so in practice you would need one lab, soon, to start pulling very far ahead.
Note this is somewhat minor; I found the overall post and similar posts very useful!