I want to flag that thinking you have a representation that could, in principle, be used to do the right thing is not the same as believing it will “Just Work”. If you run a naive RL process against neural embeddings or LLM evaluators, you will definitely get bad results. I do not believe in “alignment by default”, and I push back on that idea frequently whenever it comes up. What has happened is that the problem has gone from “not clear how you would do this even in principle; essentially impossible with current knowledge” to merely tricky.
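To make the failure mode concrete, here is a minimal sketch (toy stand-ins throughout, not anyone's actual training setup) of why naively maximizing a learned proxy reward, here cosine similarity to a "good behavior" embedding, invites Goodharting: the optimizer finds outputs that score highly under the proxy without being good.

```python
# Minimal sketch, assuming toy stand-ins: a random projection plays the
# role of a frozen embedding model, and random-search mutation plays the
# role of an RL policy-improvement step. The point is only that naive
# optimization against a learned evaluator drifts toward junk that the
# proxy scores highly -- classic Goodharting.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = list("abcdefghijklmnopqrstuvwxyz ")
EMB = rng.normal(size=(len(VOCAB), 64))  # toy frozen "embedding model"

def embed(text: str) -> np.ndarray:
    """Mean-pool character embeddings; stand-in for a real encoder."""
    vecs = np.array([EMB[VOCAB.index(c)] for c in text])
    v = vecs.mean(axis=0)
    return v / np.linalg.norm(v)

target = embed("be helpful and honest")  # proxy for "the right thing"

def reward(text: str) -> float:
    """Naive proxy reward: similarity to the target embedding."""
    return float(embed(text) @ target)

# Naive optimization loop: mutate, keep whatever the proxy scores higher.
best = "hello world placeholder"
for _ in range(5000):
    cand = list(best)
    cand[rng.integers(len(cand))] = rng.choice(VOCAB)
    cand = "".join(cand)
    if reward(cand) > reward(best):
        best = cand

print(repr(best), reward(best))  # high proxy reward, typically unreadable junk
```

The gap between "the representation contains the right concept" and "optimizing against it yields the right behavior" is exactly the tricky part that remains.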