I’m not sure what you are imagining as ‘the field’, but isn’t it closer to twenty years old? (Both numbers are, of course, much less than the age of the AI field, or of computer science more broadly.)
Much of the source of my worry is that I think in the first ten-twenty years of work on safety, we mostly got impossibility and difficulty results, and so “let’s just try and maybe it’ll be easy” seems inconsistent with our experience so far.
Well, the AI technical safety work that's appropriate for neural networks is about 5-6 years old; if we go back before 2017, I don't think any relevant work was done.
AlexNet dates back to 2012; I don't think previous work on AI can be compared to modern statistical AI. Paul Christiano's foundational paper on RLHF dates back to 2017. Arguably, all of the agent foundations work has turned out to be useless so far, so prosaic alignment work may be what Roko is taking as the beginning of AI safety as a field.
yes
When were convnets invented, again? How about backpropagation?