Thanks for writing this up! Two questions I’m curious if you have good data on:
1. How do silent preferences line up with stated preferences? E.g. “average attractiveness ratings assigned on a survey” vs. “what people put in their dating profile”. I assume more people will have strong revealed preferences than will have strong stated preferences, but I’m curious by how much, and whether they end up pointing in somewhat different directions (and whether there are people who state a preference but seem to be wrong about themselves, or for whom that preference isn’t a function of their assessments of individual attractiveness).
2. I know you deliberately simplified to broad categories for this post (I assume also because a lot of the data is on e.g. dating profiles, where people use broad categories to describe their preferences), but if you know of more granular data on how the weird asymmetries among Asian people break down, I’d be curious. I assume the full story is a lot more complicated.
Thanks for the great questions.
1. Technological changes make this difficult to track. A lot of the research on explicit racial preferences in online dating looks at the last generation of online dating platforms (Match, Yahoo Personals, etc.), because on those platforms a specified racial preference was visible on your profile. The current generation of platforms generally does not do this; instead, racial preferences are specified on the matching side. I’m assuming, then, that stated preferences have dropped very low, in part because they are functionally unnecessary.

Grindr had algorithmic race filtering but recently removed it, leading men to put racial preferences in their profiles again, though there seems to be a community consensus that this is bad manners. Another anecdote: there is a man in Nathan Fielder’s The Rehearsal Season 2 who claimed to have been kicked off of every dating platform for writing “no t-girls” in his profiles. He seemed an unreliable narrator, and in general socially clueless, but it does seem plausible that you could be on the receiving end of a moderator action for explicitly stating such preferences.
So more recent experiments on racial preferences are typically surveys done in person or on Qualtrics. Gendered Black Exclusion (2014) could be taken as representative of this type of research; it finds exclusion rates comparable to what Robnett & Feliciano found with the 2005 Yahoo Personals data. Meanwhile, explicit stated racial preferences have probably dropped quite low. This 2022 article is also useful data.
This doesn’t quite get to the core of your question, which is about how people’s actual behavior around racial preferences can diverge from their ability to articulate to others (and to themselves!) that they have such a preference. To get some estimates, maybe we can take Robnett & Feliciano’s data as representative of the latter and compare it to the analysis of reply rates by race in the OkTrends data as representative of the former.
So the reply rate data shows the same directional trends as the Robnett & Feliciano data, but there are clearly a lot of interpretative subtleties here that I leave to you.
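For concreteness, here is what I mean by “same directional trends”, with made-up placeholder numbers rather than the actual figures from either dataset:

```python
# Placeholder numbers only -- NOT the real Robnett & Feliciano or
# OkTrends figures. If the two datasets agree in direction, groups that
# are stated-excluded more often should also receive replies less often.
stated_exclusion = {"group A": 0.60, "group B": 0.45, "group C": 0.20}  # share excluding
reply_rate = {"group A": 0.25, "group B": 0.32, "group C": 0.42}        # share replying

most_excluded_first = sorted(stated_exclusion, key=stated_exclusion.get, reverse=True)
least_replied_first = sorted(reply_rate, key=reply_rate.get)

# Directionally consistent iff the two orderings coincide.
print(most_excluded_first == least_replied_first)  # True for these numbers
```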
Comparing the reply rate data to the match % data suggests to me the possibility that people say and think they have strong race preferences even when they do not (maybe these preferences soften when you put yourself in an actual position to not follow them). The difficulty is that the match % data is conditional on the reply rates, which in turn are conditional on sending rates, which in turn are conditional on stated racial preferences (you probably won’t send messages to people who say they are excluding your race), so it could be that the match % data merely shows that people actually have super accurate explicit race preferences.
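To make that conditioning worry concrete, here is a minimal toy simulation of the chain, with all parameters invented for illustration (none of these numbers come from the datasets above):

```python
import random

random.seed(0)

# Invented parameters, for illustration only.
N = 100_000        # potential receivers of a cross-race message
BASE_REPLY = 0.40  # reply probability absent any racial preference

unconditioned = 0  # replies if everyone were messaged uniformly
sent = 0           # messages actually sent
replied = 0        # replies among messages actually sent

for _ in range(N):
    strong_pref = random.random() < 0.5                 # half hold a strong preference...
    states_it = strong_pref and random.random() < 0.9   # ...and usually state it
    reply_prob = BASE_REPLY * (0.2 if strong_pref else 1.0)

    if random.random() < reply_prob:
        unconditioned += 1

    # Senders mostly skip anyone who states an exclusion of their race.
    if not states_it or random.random() < 0.05:
        sent += 1
        if random.random() < reply_prob:
            replied += 1

print(f"reply rate, uniform messaging:  {unconditioned / N:.3f}")
print(f"reply rate, selected messaging: {replied / sent:.3f}")
```

Under these made-up parameters, the selected reply rate lands near the 0.40 no-preference baseline even though half the simulated population holds a strong preference: the reply-rate data can look nearly preference-free simply because stated preferences already screened the strong-preference cases out of the messaged pool.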
2. The above OkTrends data gives more granularity within the broad Asian category (here’s another blogpost). Outside of this I don’t recall seeing many studies with high granularity. This one, looking at Chinese, Japanese, and Korean international students, might prove of interest.