Not sure if this page is broken or I’m technically inept, but I can’t figure out how to reply to qualiia’s comment directly:
My gut reaction was primarily #5 and #7, but qualiia’s post articulates the rationale better than I could.
One useful piece of information that would influence my weights: what were OAI’s general hiring criteria? If they sought solely the “best and brightest” on technical skills and enticed talent primarily with premier pay packages, I’d lean harder on #5. If they sought cultural/mission fits in some meaningful way, I might update lower on #5/#7 and higher on others. I read the external blog post about the bulk of OAI compensation being in PPUs, but that’s not necessarily incompatible with mission fit.
Well done on the list overall; it seems pretty complete, though aphyer provides a good unique reason (albeit adjacent to #2).
“Want” seems ill-defined in this discussion. To the extent it is defined in the OP, it seems to be “able to pursue long-term goals”, at which point tautologies are inevitable. The discussion gives me strong stochastic-parrot / “it’s just predicting next tokens, not really thinking” vibes, where “want” and “think” are je ne sais quoi words used to describe the human experience and provide comfort (or at least a shorthand explanation) for why LLMs aren’t exhibiting advanced human behaviors. I have little doubt that many are trying to optimize for long-term planning and that AI systems will exhibit increasingly better long-term planning capabilities over time, but I have no confidence about whether that will coincide with increases in “want”, mainly because I don’t know what that means. Just my $0.02, as someone with no technical or linguistics background.