“Employees at OpenAI believed…” — do you mean Sam Altman and the board?
If this information is accurate, it speaks volumes about how flawed their alignment predictions might also be. If a company with vast resources and insider access like OpenAI can’t predict the capabilities of competing firms (a relatively simple problem with objectively knowable answers), how can we expect them to predict the behavior of advanced AI models, where the unknowns are far greater and often unknowable?