If you’re not already doing machine learning research and engineering, I think it takes more than two years of study to reach the frontier? (The ordinary software engineering you use to build Less Wrong, and the futurism/alignment theory we do here, are not the same skills.)
Yeah, to be clear, I think I would try hard to hire some people with more of the relevant domain-knowledge (trading off against some other stuff). I do think I also somewhat object to it taking such a long time to get the relevant domain-knowledge (a good chunk of people involved in GPT-3 had less than two years of ML experience), but it doesn’t feel super cruxy for anything here, I think?
“newer domain, therefore less is known about how to get anything to work at all.”
To be clear, I agree with this, but I think it mostly pushes me towards thinking that small teams with high general competence will matter more than domain-knowledge. But maybe you meant something else by this.