Nate’s view here seems similar to “To do cutting-edge alignment research, you need to do enough self-reflection that you might go crazy”. This seems really wrong to me. (I’m not sure whether he means that all scientific breakthroughs require this kind of reflection, or that alignment research is special in this respect.)
I don’t think many top scientists are crazy, especially not in a POUDA way, and I don’t think top scientists have done a huge amount of self-reflection/philosophy.
On the other hand, my understanding is that some rationalists have driven themselves crazy via too much self-reflection in an effort to become more productive. Perhaps Nate is overfitting to this experience?
“Just do normal incremental science; don’t try to do something crazy” still seems like a good default strategy to me (especially for an AI).
Thanks for this write-up; it was unusually clear/productive IMO.
(I’m worried this comment comes off as mean or reductive. I’m not trying to be. Sorry.)
The idea that all cognitive labor will be automated in the near future is a very controversial premise, and it isn’t at all implied by the claim that AI will be useful for tutoring. I think that’s the disconnect here between Altman’s words and your interpretation.