I think deploying Claude 3 was fine and most AI safety people are confused about the effects of deploying frontier-ish models. I haven’t seen anyone else articulate my position recently so I’d probably be down to dialogue. Or maybe I should start by writing a post.
[Edit: this comment got lots of agree-votes and “Deploying Claude 3 increased AI risk” got lots of disagreement so maybe actually everyone agrees it was fine.]
I would probably be up for dialoguing. I don’t think deploying Claude 3 was that dangerous, though I think that’s only because the reported benchmark results were misleading (if the gap was as large as advertised it would be dangerous).
I think Anthropic overall has caused a lot of harm by being one of the primary drivers of an AI capabilities arms-race, and by putting really heavily distorting incentives on a large fraction of the AI Safety and AI governance ecosystem, but Claude 3 doesn’t seem that much like a major driver of either of these (on the margin).
Unsure how much we disagree, Zach and Oliver, so I’ll try to quantify: I would guess that Claude 3 will move up the release date of next-gen models from OpenAI by at least a few months (my guess is 3 months), which has significant effects on timelines.
Tentatively, I’m thinking that this effect may be superlinear. My model is that each new release increases the speed of development (because of increased investment across the whole value chain, including compute, plus the realization among people that this is not like other technologies, etc.), so that a few months of acceleration now cost more than a few months on AGI timelines.
I’m interested to know why you think that. I’ve not thought about it a ton so I don’t think I’d be a great dialogue partner, but I’d be willing to give it a try, or you could give an initial bulleted outline of your reasoning here.