Thanks for the link! I ended up looking through the data and there wasn’t any clear correlation between amount of time spent in research area and p(Doom).
I ran a few averages by both time spent in research area and region of undergraduate study here: https://docs.google.com/spreadsheets/d/1Kp0cWKJt7tmRtlXbPdpirQRwILO29xqAVcpmy30C9HQ/edit#gid=583622504
For the most part, the groups don’t differ very much, although, as might be expected, a larger share of North American respondents report a high p(Doom) conditional on HLMI than respondents from other regions.
While I think your overall point is very reasonable, I don’t think your experiments provide much evidence for it. Stockfish is generally tuned to play the best move on the assumption that its opponent will also respond optimally. This is a good strategy when both sides start with the same material, but it falls apart in odds games.
Generally, the strategy for winning odds games against a weaker opponent is to conserve material, complicate the position, and play for tricks: choosing moves that may not be objectively best but that win material against a less perceptive opponent. While Stockfish is not great at this, top human chess players can be very good at it. For example, top grandmaster Hikaru Nakamura did a “Botez Gambit Speedrun” (https://www.youtube.com/playlist?list=PL4KCWZ5Ti2H7HT0p1hXlnr9OPxi1FjyC0), in which he sacrificed his queen every game and still reached a 2500 rating on chess.com, the level of many chess masters.
This isn’t quite the same as your queen-odds setup (it is easier), and the short time controls he played at are a factor, but I assume he would be able to beat most sub-1500 FIDE players at queen odds. A version of Stockfish trained to exploit a human’s subpar play would presumably do even better.
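For concreteness, “queen odds” just means White starts the game without her queen. A minimal sketch (plain Python, standard FEN notation; the function name is mine, not from any library) of constructing that starting position:

```python
# Standard chess starting position in FEN (Forsyth-Edwards Notation).
START_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

def queen_odds_fen(start_fen: str = START_FEN) -> str:
    """Remove White's queen from d1 to create a queen-odds starting position."""
    board, rest = start_fen.split(" ", 1)
    rows = board.split("/")
    # White's back rank is the 8th piece row; replace the queen ('Q')
    # with a single empty square ('1' in FEN).
    rows[7] = rows[7].replace("Q", "1")
    return "/".join(rows) + " " + rest

print(queen_odds_fen())
# rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNB1KBNR w KQkq - 0 1
```

The resulting FEN can be fed to any engine or GUI that accepts custom starting positions, which is how odds matches against Stockfish are typically set up.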