If I may take the liberty of a somewhat broader take, what I value in human literature review over chatbot assistant slop (*cough* “research by LRMs/agents with internet search”) is:
- judgement (I mean having any spine whatsoever, even if “good judgement” were a better desideratum … but that would be asking for sycophancy, so I am not asking for any self-defining/defying qualities) and
- faithful reasoning (frankly, I am not going to follow the steps and do my own research, but if I imagine I were a better person who did their own research, it makes me feel better to see the steps / a recipe for how I could form my own conclusions following a sound methodology … what are the cruxes where “reasonable people can disagree” vs what are the tar pits where there is “no fucking way anyone could possibly believe anything other than X”). I don’t care whether I agree or disagree with an opinion, but I want to see firm hinges in the arguments that help me understand multiple perspectives. Anything is better than extreme vagueness that uses too many words to say nothing of substance (called “AI slop” these days, though that style of prose was invented long before AI, and I am allergic to it … I don’t believe you are at any risk of producing that, so please keep that quality whatever else you might change)
…for example, if there are multiple reasons why a study is bad, it would be enough (for me) to explain in detail only the worst one, without a long list of everything wrong. If the sample size was small, but there was also a bigger problem that increasing the sample size would not fix anyway, it’s fine to summarize all the minor flaws in one sarcastic sentence and save the real explanation for the worst mistake they made: why their methodology could not possibly prove anything about the topic one way or the other. Moving the extra material into appendices A-J or a <details> element might also help keep things short(er)(ish).