Dario Amodei: “Now, I’m not at all an advocate of like, “Stop the technology. Pause the technology.” I think for a number of reasons, I think it’s just not possible. We have geopolitical adversaries; they’re not going to not make the technology, the amount of money… I mean, if you even propose even the slightest amount of… I have, and I have many trillions of dollars of capital lined up against me for whom that’s not in their interest. So, that shows the limits of what is possible and what is not.”
Anthropic has a March 2023 blog post, “Core Views on AI Safety: When, Why, What, and How”, that says:

If we’re in a pessimistic scenario [in which “AI safety is an essentially unsolvable problem – it’s simply an empirical fact that we cannot control or dictate values to a system that’s broadly more intellectually capable than ourselves – and so we must not develop or deploy very advanced AI systems”]… Anthropic’s role will be to provide as much evidence as possible that AI safety techniques cannot prevent serious or catastrophic safety risks from advanced AI, and to sound the alarm so that the world’s institutions can channel collective effort towards preventing the development of dangerous AIs. If we’re in a “near-pessimistic” scenario, this could instead involve channeling our collective efforts towards AI safety research and halting AI progress in the meantime. Indications that we are in a pessimistic or near-pessimistic scenario may be sudden and hard to spot. We should therefore always act under the assumption that we still may be in such a scenario unless we have sufficient evidence that we are not.
So Anthropic has specifically written that we may need to halt AI progress and prevent the development of dangerous AIs, and now we have Dario saying that he is not at all an advocate of pausing the technology, and even going so far as to say that it’s not possible to pause it.
In the same post, Anthropic wrote “It’s worth noting that the most pessimistic scenarios might look like optimistic scenarios up until very powerful AI systems are created. Taking pessimistic scenarios seriously requires humility and caution in evaluating evidence that systems are safe.”
It doesn’t seem like Dario is doing what Anthropic wrote we should do: “We should therefore always act under the assumption that we still may be in such a [pessimistic] scenario unless we have sufficient evidence that we are not.” We clearly don’t have sufficient evidence that we are not in such a scenario, especially since “the most pessimistic scenarios might look like optimistic scenarios up until very powerful AI systems are created.”