I too have grown increasingly skeptical that meta-analysis in its typical form does anything all that useful.
Unfortunately, people can be bad at understanding meta-analyses. If the studies disagree roughly 50/50, it's not necessarily true that half did something wrong. There may be a legitimate hidden moderator that changes the effect of the variable (probably revealed by the meta-analysis but not picked up sufficiently by popular reporting). Even showing that half the studies have a fatal flaw would itself be a contribution of the meta-analysis! Sometimes the effects are not truly comparable, in which case they should either be modeled/adjusted or excluded (something the meta-analyst has probably already considered, though the salient counter-examples where researchers screw this up confirm that not all papers are perfect; that is hardly unique to meta-analyses). It is indeed a problem when a meta-analysis concludes that the average effect is *the* effect, particularly with a bimodal distribution of effect sizes (it would be crazy to conclude that in that case!).
The alternative, "look at the studies individually," suffers from all the same issues: garbage in, garbage out; hidden moderators; non-comparability; over-concluding from an average. At least meta-analysis brings some systematic thinking to the evaluation of the literature. A strong meta-analysis engages with these issues and avoids the pitfalls precisely by doing what a weak one does not: it explains its inclusion criteria, models the variability, and explains the differences rather than just reporting a mean.
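To make the bimodal-average point concrete, here is a toy sketch (all numbers hypothetical, not from any real literature): half the studies find a clear positive effect, half a clear negative one, so the fixed-effect pooled mean lands near zero even though no individual study found a near-zero effect. A standard heterogeneity check such as Cochran's Q, which any competent meta-analysis would report, immediately flags that the mean is hiding two clusters.

```python
# Hypothetical effect sizes: two clusters around +0.5 and -0.5,
# as might arise from an unmodeled moderator splitting the studies.
effects = [0.50, 0.48, 0.52, 0.51, -0.50, -0.49, -0.52, -0.51]
variances = [0.01] * len(effects)  # hypothetical within-study variances

# Fixed-effect (inverse-variance weighted) pooled estimate.
weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled mean.
# Q far above its df (= k - 1) signals heterogeneity worth explaining,
# not averaging away.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

print(f"pooled mean effect: {pooled:.3f}")  # near zero, matching no study
print(f"Cochran's Q: {q:.1f} (df = {df})")  # huge: the mean hides two clusters
```

Reporting only `pooled` here would be exactly the mistake described above; reporting Q (or I²) alongside it is what separates a strong meta-analysis from a weak one.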
Agreed that good meta-analyses are good and don't get these things wrong.
If only most papers, including meta-analyses, didn't suck. Alas.