Mod note: this post violates our LLM Writing Policy for LessWrong and was incorrectly approved, so I have delisted the post to make it only accessible via link. I’ve not returned it to your drafts, because that would make the comments hard to access.
David, please don’t post more direct LLM output, or we’ll remove your posting permissions.
Curated. Conceptually building on The Rise of Parasitic AI seems worth doing. It's a potentially important phenomenon that may play a big part in how the coming century plays out, and it's reminiscent of this section of Christiano's "What Failure Looks Like". Exploring the extent to which we can bring an existing, mature discipline's concepts and models to bear on the phenomenon is a great approach.
I appreciate cashing out that process in terms of what predictions we should make if the approach makes sense. I think it is unlikely that this particular approach will prove very fruitful, but only because any conceptual approach to a new kind of problem is unlikely to prove very fruitful.
I hope you continue to find plausible ways to bring the concepts and models of successful, mature disciplines to bear on the sorts of problems we tend to care about around here.