Needless to say, writing papers and getting them into ML conferences is time-consuming. There’s an opportunity cost. Is it worth doing despite the opportunity cost? I presume that, for the particular people you talked to, and the particular projects they were doing, your judgment was “Yes the opportunity cost was well worth paying”. And I’m in no position to disagree—I don’t know the details. But I wouldn’t want to make any blanket statements. If someone says the opportunity cost is not worth it for them, I see that as a claim that a priori might be true or false. Your post seems to imply that almost everyone is making an error in the same direction, and therefore funders should put their thumb on the scale. That’s at least not obvious to me.
You seem to be suggesting that people in academia don’t read blog posts, and that blog posts are generically harder to read than papers. Both seem obviously false to me; for example, many peer-reviewed ML papers come along with blog posts, and the blog posts are intended to be the more widely accessible of the two.
Of course, blog posts can be unreadable too. Generally, I think that it’s healthy for people to write BOTH (A) stuff with lots of jargon & technical details that conveys information well to people already in the know AND (B) highly-accessible stuff intended for a broader audience. (That’s what I try to do, at least.) I think it’s true and uncontroversial to say that blog posts are great for (B). I also happen to think that blog posts are great for (A).
Anyway, I think this OP isn’t particularly addressed at me (I have nothing I want to share that would fit in at an ML conference, as opposed to neuroscience), but if anyone cares I’d be happy to discuss in detail why I haven’t written any peer-reviewed papers related to my AI alignment work since I started it full-time 2 years ago, and have no immediate plans to, and what I’ve been doing instead to mitigate any downsides of that decision. It’s not even a close call; this decision seems very overdetermined from my perspective.
I do think this is the wrong calculation, and the error caused by it is widely shared and pushes in the same direction.
Publication is a public good, where most of the benefit accrues to others / the public. Obviously, the cases where the cost to an individual outweighs the benefit to that individual are far more common than the cases where the cost outweighs the summed benefits to everyone else. And evaluating the good accrued to the researchers themselves is the wrong thing to check—if our goal is aligned AI, the question should be the benefit to the field.
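To make that public-goods asymmetry concrete, here is a minimal toy calculation; every number in it is hypothetical and chosen only to show the structure of the argument, not taken from anything above.

```python
# Toy public-goods illustration (all numbers hypothetical, in arbitrary "value units").
cost_to_author = 8.0       # effort spent writing the paper and getting it through review
benefit_to_author = 3.0    # career / feedback value the author personally captures
benefit_per_reader = 0.5   # value to each other researcher who can now build on the work
num_readers = 30           # researchers who actually read and use the result

private_net = benefit_to_author - cost_to_author
social_net = benefit_to_author + benefit_per_reader * num_readers - cost_to_author

print(f"net value to the author: {private_net:+.1f}")  # -5.0: publishing looks like a loss
print(f"net value to the field:  {social_net:+.1f}")   # +10.0: publishing is a clear win
```

Under these made-up numbers the author rationally skips publishing even though the field comes out ahead, which is the sense in which purely individual cost-benefit calculations would be expected to underweight publication.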
If we compare
(A) “actual progress”, versus
(B) “legible signs of progress”,
it seems obvious to me that everyone has an incentive to underinvest in (A) relative to (B). You get grants & jobs & status from (B), not (A), right? And papers can be in (B) while being only minimally, or not at all, in (A).
In academia, people talk all the time about how people are optimizing their publication record to the detriment of field-advancement, e.g. making results sound misleadingly original and important, chasing things that are hot, splitting results into unnecessarily many papers, etc. Right?
Hmm, I’m trying to guess where you’re coming from. Maybe you’d propose a model with
(C) “figuring things out”
(D) “communicating those things to the x-risk community”
(E) “communicating those things to the ML community”
And the idea is that someone whose funding & status come entirely from the x-risk community has no incentive to do (E). Is that it?
If so, I strongly endorse that (E) is worth doing, to a nonzero extent. But it’s not obvious to me that the AGI x-risk community is collectively underinvesting in (E); I think I lean towards “overinvesting” on the margin. (I repeat: on the margin!! Zero investment is too little!)
I think that everyone who is both motivated by x-risk and employed by a CS department—e.g. CHAI, the OP (Krueger), etc.—is doing (E) intensively all the time, and will keep doing so perpetually. We don’t have to worry about (E) going to zero. If other people do (D) to the exclusion of (E), I think good ideas will trickle out through the above-mentioned people, and/or through ML people getting interested and gradually learning enough jargon to read the (D) stuff.
I think that, for some people / projects, getting their results into an ML conference would cut down the amount of (C) & (D) that gets done by a factor of 2, or even more, or much more when it affects the choice of what to work on in the first place. And I think that’s a very bad tradeoff.
I would say a similar thing about any technical field. I want climate modelers to spend most of their time figuring out how to do better climate modeling, in collaboration with other climate modelers, using climate modeling jargon. Obviously, to some extent, there has to be accessible communication to a wider audience of stakeholders about what’s going on in climate modeling. But that’s going to happen anyway—plenty of people like writing popular books and blog posts and stuff, and kudos to those people, and likewise some stakeholders outside the field will naturally invest in learning climate modeling jargon and injecting themselves into that conversation, and kudos to those people too. Groupthink is bad, interdisciplinarity is good, and so on, but lots of important technical work just can’t be easily communicated to people with no subject-matter expertise or investment in the subfield, and it’s really really bad if that kind of work falls by the wayside.
Then separately, if people are underinvesting in (E), I think it’s non-obvious (and often false) that the solution to that problem is to try to get papers through peer review and into ML conferences.
For one thing, if an x-risk-concerned person can write an ML paper, they can equally well write a blog post that avoids x-risk jargon (and maybe even replaces it with ML jargon), and I think that would have a comparable chance of getting widely read by ML people and successfully communicating substantive ideas to them. It’s not like every paper in an ML conference gets widely read and cited anyway, right? But blog posts take absurdly less time.
For another thing, if we assume for the sake of argument that the “gravitas” / style / cite-ability of academic papers is a feature not a bug, then people can get those by putting a paper onto the ML arXiv, and that takes much less time than going through peer review. I think in some cases that’s a great choice.
To respond briefly, I think that people underinvest in (D), and write sub-par forum posts rather than aim for the degree of clarity that would allow them to do (E) at far less marginal cost. I agree that people overinvest in (B)[1], but also think that it’s very easy to tell yourself your work is “actual progress” when you’re doing work that, if submitted to peer-reviewed outlets, would be quickly demolished as duplicative of work you’re unaware of, or incompletely thought-out in other ways.
I also worry that many people have never written a peer-reviewed paper and aren’t thinking through the tradeoff; they just never develop the necessary skills, and so can’t ever move to more academic outlets[2]. I say all of this as someone who routinely writes for both peer-reviewed outlets and for the various forums—my thinking needs to be clearer for reviewed work, and I agree that the extraneous costs are high, but I think that the tradeoff in terms of getting feedback and providing something for others to build on, especially others outside of the narrow EA-motivated community, is often worthwhile.
Edit to add: But yes, I unambiguously endorse starting with writing arXiv papers, as they get a lot of the benefit without needing to deal with the costs of review. They do fail to get as much feedback, which is a downside. (It’s also relatively easy to put something on arXiv and submit to a journal for feedback, and decide whether to finish the process after review.)
Though much of that work (reviews, restatements, etc.) can be valuable despite that.
To be fair, I may be underestimating the costs of learning the skills for those who haven’t done this—but I do think there’s tons of peer mentorship within EA which can work to greatly reduce those costs, if people are willing to use those resources.
I think that the tradeoff in terms of getting feedback and providing something for others to build on, especially others outside of the narrow EA-motivated community, is often worthwhile.
This should be obvious to everyone! As an outside observer and huge sympathizer, it is super-frustrating how siloed the broad EA/rationalist/AI-alignment/adjacent community is—this specific issue with publication is only one of the consequences. Many of “you people” interact only among “yourselves” (and I’m not referring to you, Davids), very often even socially. I mean, you guys are trying to do the most good possible, so help others use and build on your work! And don’t waste time reinventing what is already common or, at least, what already exists outside. More mixing would also help prevent Leverage-style failures and would probably improve what from the outside looks like very weird and unhealthy “bay area social dynamics” (as put by Kaj here).
I hope you don’t mind if I pop in here. I’ve been following this conversation with considerable interest. I too am an outsider. I’ve been peeking in every now and then for years, but started posting here almost a year ago, more or less, to test the waters. Anyhow, you say:
This should be obvious to everyone! As an outside observer and huge sympathizer, it is super-frustrating how siloed the broad EA/rationalist/AI-alignment/adjacent community is—this specific issue with publication is only one of the consequences.
Yes! And as you go on to say, it works in both directions too (“...don’t waste time reinventing what is already common...”). There’s breathtaking ignorance of existing work in relevant fields.
For example, around the corner there are some interesting discussions under the heading of “semiotic physics,” which, as far as I can tell, is the application of complex dynamics to understanding LLMs. Super-important, super-interesting, even to someone like me, who can’t do the math. But the conversations proceed as though no one had ever thought of doing this before, which simply is not true. And as far as I can tell, there’s no intention of trying to take this work to the outside world, which is a mistake.
At times it seems like this place is populated by people who think they’re the smartest one in the room, and maybe they were at one time, back in secondary school. But it’s a large world and there are lots of “smartest in the room” people in it. You need to get over it.
It’s a waste of intelligence and creativity.
Thanks, agreed. And as an aside, I don’t think it’s entirely coincidental that neither of the people who agree with you is in the Bay.
My point is not specific to machine learning. I’m not as familiar with other academic communities, but I think most of the time it would probably be worth engaging with them if there is somewhere where your work could fit.
Speaking for myself…
I think I do a lot of “engaging with neuroscientists” despite not publishing peer-reviewed neuroscience papers:
I write lots of blog posts intended to be read by neuroscientists, i.e. I attempt to engage with background assumptions that neuroscientists are likely to have, avoid assuming non-neuroscience background knowledge or jargon, etc.
[To be clear, I also write even more blog posts that are not in that category.]
When one of my blog posts specifically discusses some neuroscientist’s work, I’ll sometimes cold-email them and ask for pre-publication feedback.
When I have questions about a neuroscientist’s paper, I’ll sometimes cold-email them to try to start a chat.
There are a handful of neuroscientists whose work is unusually relevant to AGI capabilities and/or safety (in my opinion), and I’m kinda always on the lookout for excuses to get in touch with them, with some amount of success I think.
I got interviewed on a popular podcast in AI-adjacent neuroscience, and I have a 1-hour zoom talk that I give whenever anyone invites me.
Between those things, plus word-of-mouth, I feel pretty confident that WAY more neuroscientists are familiar with my detailed ideas than is typical given that I’ve been in the field full-time for only 2 years (and spend barely half my time on neuroscience anyway), and also WAY more than the counterfactual where I spend the same amount of time on outreach / communication but do so mainly via publishing peer-reviewed neuroscience papers. Like, sometimes I’ll read a peer-reviewed paper in detail, and talk to the author, and the author remarks that I might be the first person to have ever read it in detail apart from their own close collaborators and the referees.
You’re very unusually proactive, and I think the median member of the community would be far better served if they engaged the way you do. Doing that without traditional peer-reviewed work is fine, but unusual, and in many ways it is more difficult than peer-reviewed publication. And for early-career researchers, I think it’s hard to be taken seriously without some more legible record—you have a PhD, but many others don’t.