Putting aside the general question (is OpenAI good for the world?), I want to consider the smaller question: how do OpenAI's demonstrations of scaled-up versions of current models affect AI safety?
I think there's a much easier answer to this. Any risk we face from scaling up models we already have, with funding much less than tens of billions of dollars, amounts to unexploded uranium sitting around that we're refining in microgram quantities. The absolute worst that can happen with connectionist architectures is that we solve all the hard problems without ever having run the trivial scaled-up variants; scaling up would then still be trivial, and so that final step to superhuman AI would be trivial too.
Even if scaling up ahead of time results in slightly faster progress towards AGI, it at least makes it easier to see what's coming, since further improvements then require research and thought, not just trivial quantities of dollars.
Going back to the general question, one good I see OpenAI producing is the normalization of the conversation around AI safety. It is important for authority figures to be talking about long-term outcomes, and in order to be an authority figure, you need a shiny demo. It’s not obvious how a company could be more authoritative than OpenAI while being less novel.
Post OpenAI exodus update: does the exit of Dario Amodei, Chris Olah, Jack Clark, and potentially others from OpenAI make you change your opinion?
To the question of how OpenAI's demonstrations of scaled-up versions of current models affect AI safety: I don't think much changes. It does seem that OpenAI is aiming to go beyond simple scaling, which seems much riskier.
As to the general question, that news certainly makes me more worried about the state of things. I know far too little about the decision to be more concrete than that.