Thank you for doing this! I guess I’ll use the Steinhardt and Gates materials as my go-to from now on until something better comes along!
Given the unifying theme of the qualitative comments*, I’d love to see a follow-up study in which status effects are controlled for somehow. Like, suppose you used the same articles/posts/etc., but swapped the names of the authors, so that e.g. the high-status** ML people were listed as authors of “Why alignment could be hard with modern deep learning.”
*I think that almost all of the qualitative comments you list are the sort of thing that seems heavily influenced by status: e.g. when someone you respect says X, it’s deep and insightful and “makes you think”; when a rando you don’t respect says X, it’s “speculative” and “philosophical” and not “empirical.”
**High status among random attendees of ML conferences, that is. Different populations have different status hierarchies.
Agreed that status / perceived in-field expertise seems pretty important here, especially as seen through the qualitative results (though the Gates talk did surprisingly well despite Gates not being an AI researcher, and the content reflects that). We probably won’t have the energy, time, or money to test something like this (and we have limited access to researchers), but I think we can hold “status is important” as something pretty true given these results, Hobbhahn’s post (https://forum.effectivealtruism.org/posts/kFufCHAmu7cwigH4B/lessons-learned-from-talking-to-greater-than-100-academics), and a ton of anecdotal evidence from a number of different sources.
(I also think the Sam Bowman article is a great one to recommend, and in fact I recommend it first a lot of the time.)