I’ll add that LLMs seem fond of bolding things too and my mind now has “lots of phrases” bolded as a strong heuristic for LLM. Which is unfortunate, because I see the usefulness if it’s well done.
Ah, lol, the LLM drafts were done by models with about 200K tokens of my past blog posts in the context window, so they’re pretty good at imitating my style/fairly high amount of bolding. Most of it was organic and human added though!
Mild counterevidence: I’ve probably read enough of your writing that the bolding seemed more distinctively Neelean(?) than LLM-esque, but I expect most readers to have read more output from the latter on priors...