Academic research tends to randomize everything that can be randomized, including the order of the different IAT phases, so your first concern shouldn’t be an issue in published research. (The keyword for this is “order effect.”)
The IAT is one of several measures of implicit attitudes used in research. When taking the IAT, it is transparent to the participant what is being tested in each phase, so people could try harder on some trials than on others; that is not the case with many of the other tests, many of which use subliminal priming (e.g., flashing either a black man’s face or a white man’s face on the screen for 20 ms immediately before showing the stimulus that participants are instructed to respond to). The different measures tend to produce fairly similar results, which suggests that effort doesn’t have that big an effect (at least for most people). I suspect that this transparency is part of the reason the IAT has caught on in popular culture: many people taking the test have the experience of it getting harder when they’re doing a “mismatched” pairing, so they don’t need to rely solely on the website’s report of their results.
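For concreteness, the “getting harder” is quantified as a reaction-time difference between the matched and mismatched pairing blocks, conventionally scaled by the variability of the latencies (the “D score” style of scoring). Here is a minimal sketch in Python; the data are hypothetical and it omits the error-penalty and outlier-trimming steps of the full published procedure:

```python
import statistics

def iat_effect_score(matched_rts, mismatched_rts):
    """Simplified IAT effect score: difference in mean response latency
    between the mismatched and matched pairing blocks, divided by the
    pooled standard deviation of all latencies. A positive score means
    the participant was slower on the mismatched pairing."""
    diff = statistics.mean(mismatched_rts) - statistics.mean(matched_rts)
    pooled_sd = statistics.stdev(matched_rts + mismatched_rts)
    return diff / pooled_sd

# Hypothetical reaction times (milliseconds) for one participant.
matched = [612, 580, 655, 598, 630]      # e.g., flower + pleasant block
mismatched = [745, 802, 760, 731, 790]   # e.g., flower + unpleasant block
print(iat_effect_score(matched, mismatched))
```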
The survey that you took is not part of the IAT. It is probably a separate, explicit measure of attitudes about race and/or gender (do any of these questions look familiar?).
None of those questions were on the survey, but some of the questions on the survey were similar.
The descriptions of the other measures of implicit attitudes on that page aren’t detailed enough for me to critique their methodology effectively. The first question that comes to mind, though, is to what extent these tests have been calibrated against associations we already know about. For example, if people are given implicit association tests matching words with pictures of, say, smiling children with candy versus pictures of people with injuries, how do they score?