I think that external deployment of AI systems is good for the world, and so policies that incentivize AI companies to deploy only internally are bad.
The majority of existential risk comes from AI systems that are internally deployed in AI companies, because of the standard story: early misaligned AGIs do a huge amount of AI R&D, build wildly superintelligent AIs, and then the superintelligences disempower humanity.
External deployment is much less existentially dangerous because (1) externally deployed AIs won’t have access to huge amounts of compute for AI R&D and (2) they won’t be deliberately deployed to do AI R&D.
Separately, there are many upsides to external deployment:
I think government/societal wakeup to AI is generally good, and the largest driver of wakeup is AI having large effects on society, which requires external deployment.
There are a bunch of particular domains which might be very existentially helpful, e.g. AI for epistemics, AI for alignment research, AI for verification / trust, etc.
I am concerned that many policies people in the AI safety space are pushing for create an incentive for companies not to externally deploy their AI models, while doing nothing to push against internal deployment. Two salient examples are:
The EU AI Act classifies models trained with >1e25 FLOP as having “systemic risk” by default, and requires notification, model evaluations, risk assessment and mitigation, certain cybersecurity measures, and various other obligations before such models can be deployed in the EU market.
SB 53 requires that developers with >$500M in annual gross revenue, before publicly deploying a frontier model (>1e26 FLOP), publish a transparency report with catastrophic risk assessments, third-party evaluator involvement, and evals results. This seems less burdensome than the EU AI Act, so the argument applies less strongly, but it still carries some force. (A rough sketch below compares the two compute thresholds.)
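To make the two compute thresholds above a bit more concrete, here is a minimal sketch (not part of the original argument) of which kinds of training runs each threshold would capture. It uses the common ~6 × parameters × tokens approximation for dense-transformer training compute, and the model sizes are hypothetical, chosen only for illustration:

```python
# Rough sketch: compare hypothetical training runs against the two compute
# thresholds discussed above. Uses the common ~6 * N * D approximation for
# dense-transformer training FLOP; the model sizes below are made up.

EU_AI_ACT_THRESHOLD = 1e25  # default "systemic risk" threshold (EU AI Act)
SB_53_THRESHOLD = 1e26      # "frontier model" threshold (SB 53)

def approx_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate training compute for a dense transformer: ~6 * N * D."""
    return 6 * n_params * n_tokens

# Hypothetical runs, chosen only to show where the thresholds fall.
runs = {
    "100B params, 20T tokens": approx_training_flop(100e9, 20e12),
    "500B params, 40T tokens": approx_training_flop(500e9, 40e12),
}

for name, flop in runs.items():
    print(
        f"{name}: ~{flop:.1e} FLOP | "
        f"over EU AI Act threshold: {flop > EU_AI_ACT_THRESHOLD} | "
        f"over SB 53 threshold: {flop > SB_53_THRESHOLD}"
    )
```

On this rough estimate, the EU threshold bites about an order of magnitude earlier in training compute than SB 53’s.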
Strong Agreement.
That said, if you’re talking about Mythos in particular, I think the decision to wait to release was quite sensible.
It’s also unclear to me to what degree “AI safety” has contributed to no-release policies. (Unclear not in the sense that I suspect it hasn’t; I’m genuinely unsure.)
At least my reaction to such announcements by labs has always been (with the exception of Mythos) that it helps some and hurts some, but doesn’t help very much. By far the biggest dangers come from building the system in the first place; after that, internal deployment exposes you to most of the extra risk you’d get from a full deployment.