Not saying that “Sydney is unsafe” isn’t a legitimate criticism, but I doubt OpenAI is run by people with the personal agency or social capital to make any of those decisions. Leadership is following the script for “successful technology company”, and none of the things you mentioned are in that script.
pretty sure Microsoft is strong enough to do this entirely on their own. check out some of their work on language models, e.g. GODEL and UniLM. all you need is scale!
They could have:

1. Not developed it for Microsoft.
2. Developed it for Microsoft, but insisted on proper safety.
3. Not signed up for whatever deal allowed Microsoft to force them to skip (1) or (2) without sufficient alignment checks.