I think this is a decently engaging story, but it sounds like a Claude story, not a Tomás B story. The ending is too happy, technology is allowed to be good, and there are no themes of the protagonist being complicit in a system they find abhorrent. Also, “the protagonist of these stories in my context window goes to therapy and resolves their internal tensions” is the most Claude it is possible for a story to be.
I would be sad if you stopped writing stories because other humans could write stories of similar quality by some metrics, and I would also be sad if you stopped writing because AI can write fiction that is good in different ways than your fiction is good.
Humans are hilariously bad at wilderness survival in the absence of societal knowledge and support. The support doesn’t need to be 21st-century-shaped, but we do need both physical and social technology to survive and reproduce reliably.
That doesn’t matter much, though, because humans live in an environment which contains human civilization. The “holes” in our capabilities don’t come up very often.
The right tools could also paper over many of the deficiencies of LLM agents. I don’t expect the tools that let groups of LLM agents collectively do impressive things to result in particularly human-shaped agents, though.
Concretely, sample efficiency is very important if you want a human-like agent that can learn on the job in a reasonable amount of time. It matters much less if you can train once on how to complete each task with a standardized set of tools, and then copy the trained narrow system around as needed (a toy cost sketch below illustrates this).
(Note: perhaps I should say “language-capable agent” rather than “LLM-based agent”)
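To make the amortization point concrete, here's a minimal back-of-envelope sketch in Python. All the numbers and function names are made up for illustration: the point is just that a large one-time training cost gets divided across every deployed copy, while on-the-job learning is paid per deployment, so poor sample efficiency hurts the second approach far more than the first.

```python
# Toy cost comparison (hypothetical numbers throughout):
# train-once-and-copy vs. per-deployment on-the-job learning.

def total_cost_train_once(train_cost: float, copy_cost: float, n_deployments: int) -> float:
    """Train a narrow system once, then copy the trained weights to each deployment."""
    return train_cost + copy_cost * n_deployments

def total_cost_learn_on_job(per_deployment_learning_cost: float, n_deployments: int) -> float:
    """Each deployed agent learns the task from scratch on the job."""
    return per_deployment_learning_cost * n_deployments

# Illustrative (made-up) numbers: training once is expensive, copying is cheap.
print(total_cost_train_once(train_cost=1e6, copy_cost=1.0, n_deployments=10_000))
# -> 1_010_000.0: the 1e6 training cost is amortized to ~101 per deployment.

print(total_cost_learn_on_job(per_deployment_learning_cost=500.0, n_deployments=10_000))
# -> 5_000_000.0: the learning cost is paid again at every deployment.
```

Under these assumed numbers the train-once approach wins by ~5x even though its upfront cost is 2000x higher per task, and the gap widens with more deployments; that's the sense in which sample efficiency stops being the binding constraint once copying is an option.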