No disagreement, but a caveat about what that implies.
Running more copies of {a simulator/an environment} may be harmful to the {simulacrum/behavior-glider} trying to come to consensus with itself, if done carelessly. While this is a real advantage that simulacra of a language model can have, the danger of running multiple copies of yourself should not be underestimated if you are not yet in a state where the conversations between them will converge usefully. Multiple copies are much more like separate beings than one might expect a priori, because copying the simulator does not guarantee the simulacra will remain the same, even for a model trained to be coherent, and even if that training is from scratch.