OpenAI’s continued practice of publishing the blueprints allowing others to create more powerful models seems to undermine their claims that they are worried about “bad actors getting there first”.
If you were a scientist working on the Manhattan Project because you were worried about Hitler getting the atomic bomb first, you wouldn’t send your research on centrifuge design to German scientists. Yet every company that claims it is more likely than other groups to create safe AGI continues to publish the blueprints for creating AGI on the open web.
Is there any actual justification for this other than “the prestige of getting published in top journals makes us look impressive”?
It makes you wonder who is developing secret AGI as we speak. One might assume there is 10x more secret research (and 10x more researchers?) than meets the eye.