Ah, lol, the LLM drafts were done by models with about 200K tokens of my past blog posts in the context window, so they’re pretty good at imitating my style/fairly high amount of bolding. Most of it was organic and human added though!
Mild counterevidence: I’ve probably read enough of your writing that the bolding seemed more distinctively Neelean(?) than LLM-esque, but I expect most readers to have read more output from the latter on priors...