Erik has already pointed out some problems in the math, but also:

> Formal definition: Same as for the attractor sequence, but for a positive Lyapunov coefficient.
I’m not sure this feels right. For the attractor sequence, it makes sense to locate the attractor at the end of the sequence (the state toward which the dynamics converge), and so to think of the “structural properties incentivizing attraction” as lying there. By contrast, the “structural properties incentivizing chaos” would seem to live at the start of the sequence, the point after which different paths wildly diverge, rather than in any one of those divergent endings. Intuitively, a sequence should count as chaotic exactly when its Lyapunov exponent is high.
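To make that last intuition concrete, here is a minimal sketch (my own toy example, not from the post) estimating the Lyapunov exponent of the logistic map $x \mapsto rx(1-x)$ as the orbit average of $\ln|f'(x)|$. The exponent is negative in the attracting regime and positive in the chaotic one, which is the property being appealed to:

```python
import math

def lyapunov_logistic(r, x0=0.4, n=10000, burn_in=100):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    by averaging log|f'(x)| = log|r*(1 - 2x)| along the orbit."""
    x = x0
    for _ in range(burn_in):  # discard the transient before averaging
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

# Negative exponent: orbits converge to an attractor (here a fixed point).
print(lyapunov_logistic(2.5))  # ≈ -0.69, i.e. ln(1/2)
# Positive exponent: nearby orbits diverge exponentially (chaos).
print(lyapunov_logistic(4.0))  # ≈ +0.69, i.e. ln(2)
```

Note that the estimate is a property of the whole trajectory, not of its ending, which is why it seems natural to tie chaoticity to the exponent itself rather than to any particular divergent continuation.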
On another note, I wonder whether such a conceptualization of language generation as a dynamical system can be fruitful even for natural, non-AI linguistics.