I recognise that problem as well. Unfortunately it has a really large number of advantages. Not only might you flatter your reviewers and build camaraderie with the people you want to cite your papers, but you also trigger citation alerts for more people. Google Scholar (or presumably alternatives) tells people "hey, this cited you" and then more of the relevant audience sees your paper. Often these "padding references" are not papers you have actually read in full detail. You know the abstract, the conclusion, the figures, and maybe saw a familiar equation, then joined the dots. "Oh, it's like their paper from last year, but they applied the method too..."
My ideal solution (although I have never actually tried this with a journal) would be to split the reference list into two sections: "critical references" (up to maybe 4 things that really set up what you are doing), then the "other references," where you cite the most recent paper from every other researcher working on the topic.
But yes, I have so many times gone down pointless rabbit holes when a paper says "we used the method of [4,5]." I (naturally) look at [4] first, and none of it makes any sense to me. Then I look at [5], and it's exactly the same notation as the first paper I was looking at and explains the method well. [4] was the slightly-rubbish version of the method that came first. The paper could have just cited [5] (the improved version they actually used), but the incentives were wrong.