Okay, but they don’t have to be ruled out for you to say things like “One way this could work is such-and-such” and have that be sane in the context of what already exists. Again, if you think something is a near-term concern, it’s not unreasonable to reference how existing systems could evolve in the near future. I think what’s actually going on here is that, rather than non-LLM agents being merely “not ruled out” (which I agree with; they are by no means ruled out), Yudkowsky and Soares find LLM agents an implausible architecture but don’t want to say so explicitly, because they think saying it too loudly would speed up timelines. I think they’re actually wrong about the viability of LLM agents, but their reticence contributes to a sort of oddly abstract tone the writing would otherwise have less of.