Hey Neel, I’ve heard you make similar remarks informally at talks or during Q&A sessions in past in-person panels and events, and it’s great that you’ve written them up so that they’re available in a nuanced format to a broader audience. I agree with the points you’ve made, but have a slightly different perspective on how it connects to the example of people asking for your strategic takes specifically, which I’ll share below (without presumption).
TL;DR: “Good strategic takes are hard to measure, but status is easy to recognize.”
I. Executive Summary
People aren’t necessarily confusing research prowess with strategic insight. Rather, they recognize you as having achieved elite social standing within the field of AI more broadly and want:
Access to perceived insider knowledge
Connection to high-status individuals
The latest thinking from those “in the room”
II. Always Use The Best Introduction
Before reading this post, I believed that the median person asking these questions was motivated by your impressive academic performance during your undergraduate studies, something that can be (over)simplified to “wow, this guy studied pure math at Cambridge and ranked top of his class, he’s one of the smartest people in the world, and smart people are correct about lots of things, he might have a correct answer to this question I have!”. I’m quite embarrassed to admit that this is pretty much what was going through my head when I attended a session you held at EAG last year, and I wouldn’t be surprised if others there were thinking the same.
Along similar lines, I recall reaching out to one of your former mentees for a 1:1 thinking, “wow, this guy studied computer science at Cambridge and ranked top of his class, he’s one of the smartest people in the world, and smart people are correct about lots of things!”. I also took the time to read his dissertation, and found it interesting, but that first impression mattered a lot more than it should have. An analogy: when people are selecting a model for a task, they want to use the best model for that task. But if a model takes the top spot on a leaderboard where test scores are easy to measure, that tends to mess with human psychology, which irrationally pattern-matches and assumes generalization across every possible task.
III. My Key Takeaway:
My key takeaway was that although this winner-take-all dynamic may have been one factor, your model assigns more weight to the work you’ve done after graduating and pioneering the field of mechinterp.
IV. Credentials vs. Accomplishments
To be clear, founding mechinterp is a greater accomplishment than any formal credential. But even though teams of researchers at frontier labs are working on this agenda, it’s not mainstream yet (just take a look at mechinterp.com), whereas the label of “math/CS genius” is generic enough to be legible to the average person. The arguments in your post about research being an empirical science requiring skills not especially relevant to strategy are locally valid, but these points are the furthest thing from the minds of those waiting in line at conferences to ask what your p(doom) is.
V. The Tyranny of the Marginal Spice Jar
Often the demands placed upon us by our environment play an instrumental role in shaping our skillset, because we adapt against the pressures placed upon us. I’m thankfully not in a leadership position where the role calls for executive project management decisions which require a solid understanding of the broader field and industry. I’m also grateful that I’m not a public figure with a reputation to maintain whose every move is open to scrutiny and close examination. I also understand that blog posts aren’t meant to be epistemically bulletproof.
I think it’s true that when the people you speak with the most (e.g., work colleagues or MATS scholars) ask for your thoughts, their respect is based on the merits of the technical research you’ve published. And in general, when anyone publishes great AI research, that does inspire interest in that person’s AI takes.
VI. Unnecessarily Skippable Digression Into Social Bubbles and Selection Effects
Your social circle is heavily filtered by a competitive application process which strongly selects for predicted ability to do quality research. This can distort intuitions around the prevalence of certain traits that are underrepresented in the general population. For example, authoring code or research papers requires, to some extent, that your brain is adapted to processing text content, the implications of which I haven’t seen discussed in depth anywhere on LessWrong. If someone expresses a strong preference for reading over watching a video when both options are available, it’s almost like a secret handshake: so many cracked engineers have told me this that it’s become a green flag. In this world, entertainment culture and information transfer happen through books, web novels, articles, etc.
There’s an entirely separate world occupied by people with the opposite preference, i.e., wanting to watch a video rather than read text when both options are available; an example secret handshake there is when my Uber driver tells me they’re cutting down on Instagram. I admit this is a shallow heuristic, but it’s become a red flag I watch out for, indicating a potential vulnerability to predatory social media dark patterns or television binge-watching. It’s not an issue of self-control: people in the first group need to apply cognitive effort to pick things up from videos, but might have difficulty setting aside an engaging fantasy web serial. Most treatments of this topic I’ve seen address the second group, which feels alienating to me, as if there’s this ongoing dimorphism between producers and users of consumer software.
I’m typically skeptical of “high-IQ bubble”-type arguments since they tend to prove too much, so I’ll make a more specific point. I agree with you that within these groups, conflation of perceived research skill with strategic skill does occur. My (minor) contention is that I don’t think this particular mistake is the one being made by the average person asking a speaker about their strategic takes at the end of a talk.
VII. Main Argument: Research Takes?! What Research Takes?!!
Like, these sorts of questions aren’t just being fielded by researchers in the field, you know. Why do people ask random celebrities and movie stars about their takes on geopolitics? Are they genuinely conflating acting skill with strategic skill? What about pro athletes? Is physical skill being conflated with strategic skill too? Do you believe that if a rich heiress with no research background were giving a talk about AI risk, no one in the audience would be interested in her big-picture takes? It makes no sense. Other comments have pointed this out already, so I’m sorry about adding another rant to the pile, but there exists a simpler explanation which does a better job of tracking reality!
The missing ingredient here is clout.
Various essays go into the relationship between competence and power, but what you’re describing as “research skill” can be renamed expertise. These folks aren’t mistaking you for someone high in “strategic skill”; instead, they are making the correct inference that you are an elite. They want in on the latest gossip behind the waitlist at the exclusive private social where frontier lab employees are joking around about what name they’ll use for tomorrow’s new model. They’re holding their breath waiting for invention and hyperstition and self-fulfilling prophecy. They want to know the story of how Elon Musk will save the U.S. AISI and call it xAISI.
VIII. Concluding Apologetic Remarks
I’m not sure if this was an aim of the above post, but it’s an understandable impulse to want to distance oneself from scenes where it’s easier to find elites (good strategic takes) than experts (good research takes), because there can be a certain attached culture that often fails to consistently uphold virtuous truth-seeking.
Overall, I think that taking a public stance can warp the landscape being described in ways that are hard to predict, and I appreciate your approach here compared to the influencer extreme of “my strategic takes are all great, the best, and bigly” versus the corporate extreme of “oh there are so many great takes, how could I pick one, great takes, thanks all”. The position of “yeah I’ve got takes but chill, they’re mid” is a reasonable midpoint, and it would be nice to have people defer more intelligently in general.