but they’re not agents in the same way as the models in the thought experiments, even if they’re more agentic. The base-level thing they do is not “optimise for a goal”. We need to think in terms of models shaped like the ones we actually have, instead of holding on to old theories so hard that we instantiate them in reality.
Let me put it another way—do you expect that “LLMs do not optimize for a goal” will still be a valid objection in 2030? If yes, then I guess we have a very different idea of how progress will go.
Yeah, we do.