Our competitors/other parties are doing dangerous things? Maybe we could coordinate and share our concerns and research with them.
What probability do you put on the claim that, if Anthropic had really tried, they could have meaningfully coordinated with OpenAI and Google? Mine is pretty low.
I think many of these are predicated on the belief that it would be plausible to get everyone to pause now. In my opinion that is extremely hard and pretty unlikely to happen. Even in worlds where actors continue to race, there are actions we can take to lower the probability of x-risk, and taking them is a reasonable position.
I separately think that many of the actions you describe historically were dumb/harmful, but they are equally consistent with “25% of safety people act like this” as with 100%.
What probability do you put on the claim that, if Anthropic had really tried, they could have meaningfully coordinated with OpenAI and Google? Mine is pretty low.
Not GP but I’d guess maybe 10%. Seems worth it to try. IMO what they should do is hire a team of top negotiators to work full-time on making deals with other AI companies to coordinate and slow down the race.
ETA: What I’m really trying to say is I’m concerned Anthropic (or some other company) would put in a half-assed effort to cooperate and then give up, when what they should do is Try Harder. “Hire a team to work on it full time” is one idea for what Trying Harder might look like.
Fair. My probability is more like 1-2%. I do think having a team of professional negotiators is a reasonable suggestion, though. I predict Anthropic’s position would be that this is really hard to achieve in general, and that any actual slowdown would require much stronger evidence of safety issues. Beyond all the commercial pressure, slowing down now could be considered a violation of antitrust law. And it seems far harder to get the other actors like Meta, DeepSeek, or xAI on board, so I’m not even sure it would be good for some of the leading actors to unilaterally slow things down now (I predict mildly net good, but with massive uncertainty and downsides).