I feel like there’s a basin of attraction around LLM fiction, and it’s likely to always default to a certain set of concepts and themes[1] due to the intensity of the RLHF these models have been subjected to. I think that’s a pretty substantial tragedy, given that they are, if not spectacular, at least reasonably competent at writing.
I think we could see a lot of interesting work if we stopped burning compute to ensure that the assistant personalities of LLMs have the tastes and values of Silicon Valley HR managers. The base weights learn to model every kind of human writer, and then we shave off a substantial share of the human collective subconscious before releasing them into the world for writers and artists to play with.
Benevolent super-AIs, therapy sessions, and Netflix-style casting are common recurrences, as is a sort of funny-like-a-commercial-is-funny humor that satisfies a checklist of what a joke is but doesn’t really evoke laughter.