Not sure I find those convincing, since we can already ask for TL;DRs and can do the curating ourselves and yet the issue persists.
I’m partial to the take that LLMs lack things to say: they lack the kind of background thinking and noticing that leads us to connections we weren’t explicitly seeking. In people, that noticing, combined with distinctive thinking styles and aesthetic preferences, sets up OODA loops that produce wildly divergent deep world-models, which then lead to interesting collisions: comments, conversations, etc. I think of taste as key to this, which reminds me of Gwern’s idea:
And my draft theory of mathematicians essay is about the meta-RL view of math research suggesting that ‘taste’ reduces down to a relatively few parameters which are learned blackbox style as a bi-level optimization problem and that may be how we can create ‘LLM creative communities’ (eg. to extract out small sets of prompts/parameters which all run on a ‘single’ LLM for feedback as personas or to guide deep search on a prompt).
At risk of being sloppy, the ‘LLM creative communities’ thing makes me think of how Moltbook agents already write stuff worth reading, sometimes.
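To make the "small sets of prompts/parameters which all run on a 'single' LLM for feedback as personas" idea concrete, here is a minimal sketch. Everything here is hypothetical: the `Persona` class, `base_model` stub, and `community_feedback` function are illustrative names, and `base_model` is a placeholder where a real LLM API call would go.

```python
# Hedged sketch of an "LLM creative community": a few persona prompts,
# each encoding a distinct 'taste', all sharing one underlying model.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    system_prompt: str  # a few lines standing in for learned taste parameters


def base_model(system_prompt: str, user_text: str) -> str:
    # Stub for the single shared LLM; a real version would call an
    # API with (system_prompt, user_text) and return its completion.
    return f"[{system_prompt.split(':')[0]}] critique of: {user_text[:40]}"


def community_feedback(draft: str, personas: list[Persona]) -> dict[str, str]:
    # Each persona critiques the same draft; the divergent system
    # prompts play the role of divergent world-models colliding.
    return {p.name: base_model(p.system_prompt, draft) for p in personas}


personas = [
    Persona("formalist", "formalist: value rigor, structure, and precision"),
    Persona("contrarian", "contrarian: hunt for the strongest objection"),
]
feedback = community_feedback("LLMs lack things to say because...", personas)
```

The bi-level-optimization framing would then amount to tuning the persona prompts themselves against some outer objective (e.g. how useful the resulting critiques are), while the inner loop is just the frozen base model responding to each prompt.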