2. Google didn’t necessarily even break a commitment? The commitment mentioned in the article is to “publicly report model or system capabilities.” That doesn’t say it has to be done at the time of public deployment.
This document, linked on the open letter page, gives a precise breakdown of exactly what the commitments were and how Google broke them (both in spirit and in letter).[1] The summary is this:
Google violated the spirit of commitment I by publishing its first safety report almost a month after public availability, and by not mentioning external testing in that initial report.
Google explicitly violated commitment VIII by not stating whether governments are involved in safety testing, even after being asked directly by reporters.
In fact, the letter understates the degree to which Google DeepMind (GDM) violated the commitments. The real story from this article is that GDM confirmed to Time that it didn't provide any pre-deployment access to UK AISI:
However, Google says it only shared the model with the U.K. AI Security Institute after Gemini 2.5 Pro was released on March 25.
If UK AISI doesn't have pre-deployment access, a large portion of its whole raison d'être is nullified. Google withholding access quite strongly violates the spirit of commitment I of the Frontier AI Safety Commitments:
Assess the risks posed by their frontier models or systems across the AI lifecycle, including before deploying that model or system… They should also consider results from internal and external evaluations as appropriate, such as by independent third-party evaluators, their home governments, and other bodies their governments deem appropriate.
And if they didn’t give pre-deployment access to UK AISI, it’s a fairly safe bet they didn’t provide pre-deployment access to any other external evaluator.
The violation is also explained, although less clearly, in the Time article:
The update also stated the use of “third-party external testers,” but did not disclose which ones or whether the U.K. AI Security Institute had been among them—which the letter also cites as a violation of Google’s pledge.
After previously failing to address a media request for comment on whether it had shared Gemini 2.5 Pro with governments for safety testing...
Thanks. Sorry for criticizing without reading everything. I agree that, on balance, GDM didn't fully comply with the Seoul commitments re Gemini 2.5. Maybe I just don't care much about these particular commitments.