This OP would mostly correspond to what ClearerThinking calls “noisy test of skill”. But ClearerThinking also goes through various other statistical artifacts impacting Dunning-Kruger studies, plus some of their own data analysis. Here’s (part of) their upshot:
The simulations above are remarkable because they show that when researchers are careful to avoid “fake” Dunning-Kruger effects, the real patterns that emerge in Dunning-Kruger studies can typically be reproduced with just two assumptions:
Closer-To-The-Average Effect: people predict their skill levels to be closer to the mean skill level than they really are. This could be rational (when people simply have limited evidence about their true skill level), or irrational (if people still do this strongly when they have lots of evidence about their skill, then they are not adjusting their predictions enough based on that evidence).
Better-Than-Average Effect: on average, people tend to irrationally predict they are above average at skills. While this does not happen on every skill, it is known to happen for a wide range of skills. This bias is not the same thing as the Dunning-Kruger effect, but it shows up in Dunning-Kruger plots.
Spencer Greenberg (@spencerg) & Belen Cobeta at ClearerThinking.org have a more thorough and well-researched discussion at: Study Report: Is the Dunning-Kruger Effect real? (Also, their slightly shorter blog post summary.)
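The two effects above are easy to simulate. Here is a minimal sketch showing how they alone can reproduce the classic Dunning-Kruger plot shape; the shrink factor and bias size are illustrative choices of mine, not ClearerThinking's fitted values:

```python
import random
import statistics

random.seed(0)

N = 10_000
SHRINK = 0.6   # closer-to-the-average: predictions shrink toward the mean
BIAS = 10      # better-than-average: flat upward shift, in percentile points

# True skill percentiles, uniform on [0, 100].
true_skill = [random.uniform(0, 100) for _ in range(N)]

# Predicted percentile: shrink toward the mean (50), add an optimistic
# bias, then clamp to [0, 100].
predicted = [
    max(0.0, min(100.0, 50 + SHRINK * (s - 50) + BIAS))
    for s in true_skill
]

# Group people by actual quartile and compare mean actual vs mean
# predicted percentile within each quartile.
pairs = sorted(zip(true_skill, predicted))
quarter = N // 4
for q in range(4):
    chunk = pairs[q * quarter:(q + 1) * quarter]
    actual_mean = statistics.mean(s for s, _ in chunk)
    pred_mean = statistics.mean(p for _, p in chunk)
    print(f"Q{q + 1}: actual ~ {actual_mean:5.1f}, predicted ~ {pred_mean:5.1f}")
```

With these numbers the bottom quartile substantially over-predicts its percentile and the top quartile slightly under-predicts, even though no one in the simulation is "unaware of their own incompetence" in any interesting sense.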