One thing I should emphasize, which my above comment maybe doesn’t make clear enough, is that “GPTs do imitation learning, which is safe” and “we should do bounded optimization rather than unbounded optimization” are two independent, mostly-unrelated points. More on the latter point is coming up in a post I’m writing, whereas more on the former point is available in links like this.