I don’t think I quite understand the distinction you are trying to draw between “founders” and (not a literal quote) “people who do object-level work and make intellectual contributions by writing”.
If you’re the CEO of a company, it’s your job to understand the space your company works in and develop extremely good takes about where the field is going and what your company should do, and use your expertise in leveraged ways to make the company go better.
In the context of AI safety, the key product that organizations are trying to produce is often itself research, and a key input is hiring talented people. So I think it makes a lot of sense that e.g. I spend a lot of my time thinking about the research that’s happening at my org.
Analogously, I don’t think it should be considered surprising or foolish if Elon Musk knows a lot about rockets and spends a lot of his time talking to engineers about rockets.
I do think that I am personally more motivated to do novel intellectual work than would be optimal for Redwood’s interests.
I also think that the status gradients and social pressures inside the AI safety community have a variety of distorting effects on my motivations that probably cause me to take worse actions.
I think you personally feel the status gradient problems more than other AI safety executives do because a lot of AI safety people undervalue multiplier efforts. This has meant that working at MATS is less prestigious than I'd like, and MATS therefore has more trouble hiring.
I think you’re a great example of a successful founder who is also a prolific researcher and writer. I wish I had your capacity for the last two; you’ve been high impact in all three channels!
I think you're right that research startups should generally be led by researchers, and that good researchers track the field closely and ideally publish. I think at some size of organization this becomes much harder, but I don't want to deter it! If Elon wants to go deep on his rockets, this seems good, even if he's an outlier CEO.
I was trying to say two somewhat related things in this article:
1. The status gradients strongly favor "become a researcher" over "become a founder", which means we have fewer founders than ideal and our successful founders tend to follow the "lab PI" archetype, for better or worse.
2. (Implied) There is plenty of value that founders in non-research roles can add (field-building, advocacy, product development, etc.), and this is systematically undervalued relative to its impact, which discourages people from trying.
For your point 2, are you thinking about founders in organizations that have theories of change other than doing research? Or are you thinking of founders at research orgs?
The former. Even large research nonprofits (e.g., RAND, AI2, ATI, SFI) tend to be led by people with research experience, though they probably do a lot less research than CEOs at small research orgs.