somehow I never saw, so thanks for writing and linking it!
Thanks!
I agree that those hints are important to keep paying attention to! But to me steganography isn’t a sign of autonomous agency in and of itself; not until we know for sure, somehow, that they’re passing coherent messages from one LLM to another, rather than those messages being just the most eye-catching samples from the extreme tails.
It’s a sign of it simply because it’s something I expect to see more often in worlds where LLMs have autonomous agency vs worlds where LLMs do not (yet) have autonomous agency. I agree it isn’t that much evidence in and of itself for agency.
I do agree with your point on models being psychologically similar; I’ve tried to explain some of this myself. But that hypothesis is independent of the agency one.
I think it is approximately correct to presume that LLM chatbots may be sentient, and that we can’t tell for sure that they’re not, or when they’ll start being sentient, in any clean way. But it is also “more” correct, so far, to presume that current chatbots are not sentient, given how much of their sentient-seeming behavior is predicated on the user prompts themselves “triggering” it.
Sure, I don’t get annoyed when people doubt LLM sentience. It’s labeling it as delusional that I specifically take issue with!
Yes, psychiatry researcher Søren Østergaard did in August 2023, in advance of seeing any cases!
...