(Why “Top 3” instead of “literally the top priority”? Well, I do think a successful AGI lab also needs to have top-quality researchers, and other forms of operational excellence beyond the ones this post focuses on. You only get one top priority, …)
I think the situation is more dire than this post suggests, mostly because “You only get one top priority.” If your top priority is anything other than this kind of organizational adequacy, it will take precedence too often; if your top priority is organizational adequacy, you probably can’t get off the ground.
The best distillation of my understanding of why “second priority” is basically the same as “not a priority at all” is this Twitter thread by Dan Luu.
The fear was that if they said they needed to ship fast and improve reliability, then reliability would be used as an excuse to not ship quickly, needing to ship quickly would be used as an excuse for poor reliability, and they’d achieve none of their goals.
I had an insight about the implications of NAH (the Natural Abstraction Hypothesis) which I believe is useful to communicate if true and useful to dispel if false; I don’t think it has been explicitly mentioned before.
One of Eliezer’s examples is “The AI must be able to make a cellularly identical but not molecularly identical duplicate of a strawberry.” One of the difficulties is explaining to the AI what that means. This is a problem of communicating across different ontologies: the AI sees the world completely differently than we do. If NAH is true in a strong sense, then this problem goes away on its own as capabilities increase; that is, an AGI will understand us when we communicate something that has a coherent natural interpretation, even without extra effort on our part to translate it into the AGI’s version of machine code.
Does that seem right?