“Publish or Perish” (a quick note on why you should try to make your work legible to existing academic communities)
This is a brief, stylized recounting of a few conversations I had at some point last year with people from the non-academic AI safety community:
Me: you guys should write up your work properly and try to publish it in ML venues.
Them: well that seems like a lot of work and we don’t need to do that because we can just talk to each other and all the people I want to talk to are already working with me.
Me: What about the people who you don’t know who could contribute to this area and might even have valuable expertise? You could have way more leverage if you can reach those people. Also, there is increasing interest from the machine learning community in safety and alignment… because of progress in capabilities people are really starting to consider these topics and risks much more seriously.
Them: okay, fair point, but we don’t know how to write ML papers.
Me: well, it seems like maybe you should learn or hire people to help you with that then, because it seems like a really big priority and you’re leaving lots of value on the table.
Them: hmm, maybe… but the fact is, none of us have the time and energy and bandwidth and motivation to do that; we are all too busy with other things and nobody wants to.
Me: ah, I see! It’s an incentive problem! So I guess your funding needs to be conditional on you producing legible outputs.
Me, reflecting afterwards: hmm… Cynically, not publishing is a really good way to create a moat around your research… People who want to work on that area have to come talk to you, and you can be a gatekeeper. And you don’t have to worry about somebody with more skills and experience coming along and trashing your work or out-competing you and rendering it obsolete...
EtA: In comments, people have described adhering to academic standards of presentation and rigor as “jumping through hoops”. There is an element of that, but this really misses the value that these standards have to the academic community. This is a longer discussion, though...
There are roughly 3 AI safety communities in my account:
1) people in academia
2) people at industry labs who are building big models
3) the rest (alignment forum/less wrong and EA being big components). I’m not sure where to classify new orgs like Conjecture and Redwood, but for the moment I put them here.
I’m referring to the last of these in this case.
I’m not accusing anyone of having bad motivations; I think it is almost always valuable to consider both people’s conscious motivations and their incentives (which may be subconscious (EtA: or indirect) drivers of their behavior).
Other communities should be moving to AF style publication, not the other way around. This is how science should be communicated; it has all the virtues of peer review without the massive downsides.
I just moved from neuroscience to publishing on LessWrong. The publishing structure here is far superior to a journal on the whole. Waiting for peer review instead of getting it in comments is an insane slowdown on the exchange of ideas.
Journal articles are discussed by experts in private. Blog posts are discussed in public in the comments. The difference in amount of analysis shared per amount of time is massive.
Issues like mathematical or other rigor are separate issues. Having tags and other sorting systems to distinguish long and rigorous work from quick writeups of simple ideas, points, and results would allow the best of both worlds.
Furthermore, we have known this for some time. In about 2003 exactly this type of publishing was suggested for neuroscience, for the above reasons—and as a way to give credit for review work. Neuroscience won’t switch to it because of cultural lock-in. Don’t give up your great good fortune in not being stuck in an antique system.
I must admit confusion, and a quick googling does not alleviate it;
For those of us outside of academia, what exactly do you mean by “AF style publication”?
Sorry for the obscure reference. Alignment Forum is the professional variant of Less Wrong. It has membership by invitation only, which means you can trust the votes and comments to be better informed, and from real people and not fake accounts.
AF: Alignment Forum
This rubs me the wrong way. Of course, you can make anyone do X, if you make their funding conditional on X. But whether you should do that, that depends on how sure you are that X is more valuable than whatever is the alternative.
There are already thousands of people out there whose funding is conditional on them producing legible outputs. Why is that not enough? What will change if we increase that number by a dozen?
Q: “Why is that not enough?”
A: Because they are not being funded to produce the right kinds of outputs.
Needless to say, writing papers and getting them into ML conferences is time-consuming. There’s an opportunity cost. Is it worth doing despite the opportunity cost? I presume that, for the particular people you talked to, and the particular projects they were doing, your judgment was “Yes the opportunity cost was well worth paying”. And I’m in no position to disagree—I don’t know the details. But I wouldn’t want to make any blanket statements. If someone says the opportunity cost is not worth it for them, I see that as a claim that a priori might be true or false. Your post seems to imply that almost everyone is making an error in the same direction, and therefore funders should put their thumb on the scale. That’s at least not obvious to me.
You seem to be suggesting that people in academia don’t read blog posts, and that blog posts are generically harder to read than papers. Both seem obviously false to me; for example, many peer-reviewed ML papers come along with blog posts, and the blog posts are intended to be the more widely accessible of the two.
Of course, blog posts can be unreadable too. Generally, I think that it’s healthy for people to write BOTH (A) stuff with lots of jargon & technical details that conveys information well to people already in the know AND (B) highly-accessible stuff intended for a broader audience. (That’s what I try to do, at least.) I think it’s true and uncontroversial to say that blog posts are great for (B). I also happen to think that blog posts are great for (A).
Anyway, I think this OP isn’t particularly addressed at me (I have nothing I want to share that would fit in at an ML conference, as opposed to neuroscience), but if anyone cares I’d be happy to discuss in detail why I haven’t written any peer-reviewed papers related to my AI alignment work since I started it full-time 2 years ago, and have no immediate plans to, and what I’ve been doing instead to mitigate any downsides of that decision. It’s not even a close call; this decision seems very overdetermined from my perspective.
I do think this is the wrong calculation, and the error caused by it is widely shared and pushes in the same direction.
Publication is a public good, where most of the benefit accrues to others / the public. Obviously, the costs to individuals exceed their personal benefits in far more cases than those costs exceed the summed benefits to others. And evaluating the good accrued to the researchers is the wrong thing to check—if our goal is aligned AI, the question should be the benefit to the field.
If we compare
(A) “actual progress”, versus
(B) “legible signs of progress”,
it seems obvious to me that everyone has an incentive to underinvest in (A) relative to (B). You get grants & jobs & status from (B), not (A), right? And papers can be in (B) while being only minimally, or not at all, in (A).
In academia, people talk all the time about how people are optimizing their publication record to the detriment of field-advancement, e.g. making results sound misleadingly original and important, chasing things that are hot, splitting results into unnecessarily many papers, etc. Right?
Hmm, I’m trying to guess where you’re coming from. Maybe you’d propose a model with
(C) “figuring things out”
(D) “communicating those things to the x-risk community”
(E) “communicating those things to the ML community”
And the idea is that someone whose funding & status is coming entirely from the x-risk community has no incentive to do (E). Is that it?
If so, I strongly endorse that (E) is worth doing, to a nonzero extent. But it’s not obvious to me that the AGI x-risk community is collectively underinvesting in (E); I think I lean towards “overinvesting” on the margin. (I repeat: on the margin!! Zero investment is too little!)
I think that everyone who is both motivated by x-risk and employed by a CS department—e.g. CHAI, the OP (Krueger), etc.—is doing (E) intensively all the time, and will keep doing so perpetually. We don’t have to worry about (E) going to zero. If other people do (D) to the exclusion of (E), I think good ideas will trickle out through the above-mentioned people, and/or through ML people getting interested and gradually learning enough jargon to read the (D) stuff.
I think that, for some people / projects, getting their results into an ML conference would cut down the amount of (C) & (D) that gets done by a factor of 2, or even more, or much more when it affects the choice of what to work on in the first place. And I think that’s a very bad tradeoff.
I would say a similar thing about any technical field. I want climate modelers to spend most of their time figuring out how to do better climate modeling, in collaboration with other climate modelers, using climate modeling jargon. Obviously, to some extent, there has to be accessible communication to a wider audience of stakeholders about what’s going on in climate modeling. But that’s going to happen anyway—plenty of people like writing popular books and blog posts and stuff, and kudos to those people, and likewise some stakeholders outside the field will naturally invest in learning climate modeling jargon and injecting themselves into that conversation, and kudos to those people too. Groupthink is bad, interdisciplinarity is good, and so on, but lots of important technical work just can’t be easily communicated to people with no subject-matter expertise or investment in the subfield, and it’s really really bad if that kind of work falls by the wayside.
Then separately, if people are underinvesting in (E), I think it’s non-obvious (and often false) that the solution to that problem is to try to get papers through peer review and into ML conferences.
For one thing, if an x-risk-concerned person can write an ML paper, they can equally well write a blog post that avoids x-risk jargon (and maybe even replaces it with ML jargon), and I think that would have a comparable chance of getting widely read by ML people and successfully communicating substantive ideas to them. It’s not like every paper in an ML conference gets widely read and cited anyway, right? But blog posts take absurdly less time.
For another thing, if we assume for the sake of argument that the “gravitas” / style / cite-ability of academic papers is a feature not a bug, then people can get those by putting a paper onto ML arxiv, and that takes much less time than going through peer-review. I think in some cases that’s a great choice.
To respond briefly, I think that people underinvest in (D), and write sub-par forum posts rather than aim for the degree of clarity that would allow them to do (E) at far less marginal cost. I agree that people overinvest in (B), but also think that it’s very easy to tell yourself your work is “actual progress” when you’re doing work that, if submitted to peer-reviewed outlets, would be quickly demolished as duplicative of work you’re unaware of, or incompletely thought-out in other ways.
I also worry that many people have never written a peer-reviewed paper and aren’t thinking through the tradeoff; they just never develop the necessary skills, and can’t ever move to more academic outlets. I say all of this as someone who routinely writes for both peer-reviewed outlets and for the various forums—my thinking needs to be clearer for reviewed work, and I agree that the extraneous costs are high, but I think that the tradeoff in terms of getting feedback and providing something for others to build on, especially others outside of the narrow EA-motivated community, is often worthwhile.
Edit to add: But yes, I unambiguously endorse starting with writing Arxiv papers, as they get a lot of the benefit without needing to deal with the costs of review. They do fail to get as much feedback, which is a downside. (It’s also relatively easy to put something on Arxiv and submit to a journal for feedback, and decide whether to finish the process after review.)
Though much of that work—reviews, restatements, etc. can be valuable despite that.
To be fair, I may be underestimating the costs of learning the skills for those who haven’t done this—but I do think there’s tons of peer mentorship within EA which can work to greatly reduce those costs, if people are willing to use those resources.
My point is not specific to machine learning. I’m not as familiar with other academic communities, but I think most of the time it would probably be worth engaging with them if there is somewhere where your work could fit.
Speaking for myself…
I think I do a lot of “engaging with neuroscientists” despite not publishing peer-reviewed neuroscience papers:
I write lots of blog posts intended to be read by neuroscientists, i.e. I will attempt to engage with background assumptions that neuroscientists are likely to have, not assume non-neuroscience background knowledge or jargon, etc.
[To be clear, I also write even more blog posts that are not in that category.]
When one of my blog posts specifically discusses some neuroscientist’s work, I’ll sometimes cold-email them and ask for pre-publication feedback.
When I have questions about a neuroscientist’s paper, I’ll sometimes cold-email them to try to start a chat.
There are a handful of neuroscientists whose work is unusually relevant to AGI capabilities and/or safety (in my opinion), and I’m kinda always on the lookout for excuses to get in touch with them, with some amount of success I think.
I got interviewed on a popular podcast in AI-adjacent neuroscience, and I have a 1-hour zoom talk that I give whenever anyone invites me.
Between those things, plus word-of-mouth, I feel pretty confident that WAY more neuroscientists are familiar with my detailed ideas than is typical given that I’ve been in the field full-time for only 2 years (and spend barely half my time on neuroscience anyway), and also WAY more than the counterfactual where I spend the same amount of time on outreach / communication but do so mainly via publishing peer-reviewed neuroscience papers. Like, sometimes I’ll read a peer-reviewed paper in detail, and talk to the author, and the author remarks that I might be the first person to have ever read it in detail apart from their own close collaborators and the referees.
You’re very unusually proactive, and I think the median member of the community would be far better served if they were more engaged the way you are. Doing that without traditional peer reviewed work is fine, but unusual, and in many ways is more difficult than peer-reviewed publication. And for early career researchers, I think it’s hard to be taken seriously without some more legible record—you have a PhD, but many others don’t.
See also: Your posts should be on Arxiv
I do agree we’re leaving lots of value on the table and even causing active harm by not writing things up well, at least for Arxiv, for a bunch of reasons including some of the ones listed here.
I thought the response to “Your Posts Should be On Arxiv” was “Arxiv mods have stated pretty explicitly they do not want your posts on Arxiv” (unless you have jumped through a bunch of both effort-hoops and formatting hoops to make them feel like a natural member of the Arxiv-paper class)
And I think the post here is saying that you should jump through those effort and editing hoops far more often than currently occurs.
Yeah, I didn’t mean to be responding to that point one way or another. It just seemed bad to link to a post that (seems to still?) communicate false things without flagging them. (The post still says “it can be as easy as creating a pdf of your post”, which is, in my impression, maybe technically true on rare occasions but basically false in practice?)
That seems right.
I think this point was really overstated. I get the impression the rejected papers were basically converted into the arXiv format as fast as possible, so it was easy for the mods to tell. However, I’ve seen submissions to cs.LG like this and this that are clearly from the alignment community. These posts are also not stellar by the standards of preprint formatting, and apparently were not rejected.
There have also been plenty of other adaptations which were not low-effort. I worked on two: the Goodhart’s law paper and a paper with Issa Rice on HRAD. Both were very significantly rewritten and expanded into “real” preprints, but I think it was clearly worthwhile.
I don’t understand this part. They don’t have to come talk to you, they just have to follow a link to the Alignment Forum to read the research. And aren’t forum posts easier to read than papers on arXiv? I feel like if the moat exists anywhere, it is around academic journals, which often do not make their papers freely accessible, use more cryptic writing norms, and insist on PDFs, which are not as user-friendly to read as webpages.
To be sure, I’m not disagreeing with your overall point. It would be great if at least the best research from Alignment Forum/LessWrong were on arXiv or in journals, and I think you’re right we’re leaving value on the table there. I have wondered about if someone just made it their job to do these conversions/submissions for top alignment research on the forums, because there are probably economies of scale for one person doing this vs. every researcher interrupting their work flow to learn how to jump through the hoops of paper conversion/submission.
A lot of work just isn’t made publicly available
When it is, it’s often in the form of ~100 page google docs
Academics have a number of good reasons to ignore things that don’t meet academic standards of rigor and presentation
In my experience people also often know their blog posts aren’t very good.
I think your cynical take is pretty wrong, for the reasons Evan described. I’d add that because of the way academic prestige works, you are vulnerable to having your ideas stolen if you just write them up on LessWrong and don’t publish them. You’ll definitely get fewer citations, less recognition, etc.
I think people’s stated motivations are the real motivations: Jumping through hoops to format your work for academia has opportunity costs and they don’t judge those costs to be worth it.
My point (see footnote) is that motivations are complex. I do not believe “the real motivations” is a very useful concept here.
The question becomes: why don’t they judge those costs to be worth it? Is there motivated reasoning involved? Almost certainly yes; there always is.
Here are two hypotheses for why they don’t judge those costs to be worth it, each one of which is much more plausible to me than the one you proposed:
(1) The costs aren’t in fact worth it & they’ve reacted appropriately to the evidence.
(2) The costs are worth it, but thanks to motivated reasoning, they exaggerate the costs, because writing things up in academic style and then dealing with the publication process is boring and frustrating.
Seriously, isn’t (2) a much better hypothesis than the one you put forth about moats?
I’m not necessarily saying people are subconsciously trying to create a moat.
I’m saying they are acting in a way that creates a moat, and that enables them to avoid competition, and that more competition would create more motivation for them to write things up for academic audiences (or even just write more clearly for non-academic audiences).
Worth it to whom? And if they did work that’s valuable, how much of that value is lost if others who could benefit don’t see it, because it’s written up only informally or not shared widely?
Worth it to the world/humanity/etc. though maybe some of them are more self-focused.
Probably a big chunk of it is lost for that reason yeah. I’m not sure what your point is, it doesn’t seem to be a reply to anything I said.
I like this because it makes it clear that legibility of results is the main concern. There are certain ways of writing and publishing information that communities 1) and 2) are accustomed to. Writing that way both makes your work more likely to be read, and also incentivizes you to state the key claims clearly (and, when possible, formally), which is generally good for making collaborative progress.
In addition, one good thing to adopt is comparing to prior and related work; the ML community is bad on this front, but some people genuinely do care. It also helps AI safety research to stack.
To avoid this comment section being an echo chamber: you do not have to follow all academic customs. Here is how to avoid some of the harmful ones that are unfortunately present:
Do not compromise on the motivation or related work to make it seem less weird for academics. If your work relies on some LW/AF posts, do cite them. If your work is intended to be relevant for x-risk, say it.
Avoid doing anything if the only person you want to appease with it is an anonymous reviewer.
Never compromise on the facts. If you have results that say some famous prior paper is wrong or bad, say it loud and clear, in papers and elsewhere. It doesn’t matter who you might offend.
AI x-risk research has its own perfectly usable risk sheet you can include in your papers.
And finally: do not publish potentially harmful things just because it benefits science. Science has no moral value. Society gives too much moral credit to scientists in comparison to other groups of people.
I think this describes how Eliezer’s grudge against academia has set back AI alignment (even the parts that aren’t related to his organization, since his cultural influence has made this a wider norm).
Strongly agree. I went in a similar direction here: https://www.lesswrong.com/posts/Tw944k5t6tq82CFNm/portia-s-shortform?commentId=H9ffSnAasjwz8CRRc#H9ffSnAasjwz8CRRc
I hate the academic publishing industry. There is so much about it that is utterly broken. The publishing fees, the access fees, the fucking closed access, the massive delay to publication, the pressure to follow hypes and push out quantity, the pressure to stick to tiny clearly solvable problems rather than bigger and more important underlying issues, to knock out tons of tiny irrelevant papers rather than one proper book, the targeting of subfields rather than an interdisciplinary scope, the fact that peer review is often really not the objective measure of improvement that it should be, but just a way for the reviewer to force you to cite their unrelated stuff and avoid statements they dislike… and yet, the basic idea of peer review and minimum standards and easily citable work and pressure to make things mathematically precise has a hell of a lot going for it. I’ve shown articles from here to people working in machine learning, and they basically went “never heard of the author, I assume there is a reason for that, and that my journals have reasons for rejecting them” or “this is not tied back to code and math sufficiently for me to judge it at all; I can’t even say it is wrong, I am not even sure what it is supposed to mean”. They often get the impression that they would need to do a lot of work to turn your paper into something meaningful and useful, and they see no reason to do so for you.
Pragmatically, it is impossible to be heard in scientific circles, where you want and need to be heard, if you do not publish. It disqualifies you from jobs, from funding, from conferences, from simply being read. They won’t be aware you exist, because you do not turn up in their journals, and if they learn that you exist, they likely still won’t engage, because they think that if you had a concrete point and were serious about it, you would have written it in a paper by a journal they trust and that has vetted it, and you haven’t, so you either can’t, lacking skills they consider basic, or you don’t respect them and the topics and approaches they work on, so they do not feel like respecting yours.
I am not saying only consider academic metrics in your actions; after all, academia is not doing well on the questions we care about here. I think pushing out text outside of academic journals, to fellow interested parties, but also to the general public, is fucking important. I think engaging with big questions is important.
But you need the grounding from getting feedback from scientists. They will often be able to tell you very quickly that a cool idea you had simply is not practical on a technical level. Or may point out that they are aware of this concern of yours, though their terminology is different and they are not debating it publicly. Also, if you want to make a difference, you need them to change what they are doing.
I know it is frustrating, and not all of these standards are justified and fair. But they are not arbitrary, they do reflect underlying important standards, and they simply have significant real world importance.
As someone with an academic background now working in mainstream ML research, I strongly endorse the message of this post.
I agree with many on this forum that (a) there is some extra work in writing an article in an “academic” style, and (b) academic articles are often written with the objective of impressing a reviewer rather than being completely transparent about what was achieved in the work and how it advances the state of knowledge. I was incredibly frustrated by both of these issues when I was a grad student. However, when I try to read research written on LessWrong, it often seems even more difficult to understand. I’ve heard from others working in ML but not well-versed in safety that they have had similar experiences.
Another commenter complained that “getting [papers] into ML conferences is time-consuming.” Definitely agree, and conference reviewing in ML is atrocious. But you can avoid all of that work just by putting your work on arXiv instead of aiming for an ML conference.