“Publish or Perish” (a quick note on why you should try to make your work legible to existing academic communities)

This is a brief, stylized recounting of a few conversations I had at some point last year with people from the non-academic AI safety community:[1]

Me: you guys should write up your work properly and try to publish it in ML venues.

Them: well, that seems like a lot of work, and we don’t need to do that; we can just talk to each other, and all the people I want to talk to are already working with me.

Me: What about the people you don’t know who could contribute to this area and might even have valuable expertise? You could have way more leverage if you could reach those people. Also, there is increasing interest in safety and alignment from the machine learning community… because of progress in capabilities, people are really starting to take these topics and risks much more seriously.

Them: okay, fair point, but we don’t know how to write ML papers.

Me: well, then it seems like you should learn, or hire people to help you with that, because this is a really big priority and you’re leaving a lot of value on the table.

Them: hmm, maybe… but the fact is, none of us have the time and energy and bandwidth and motivation to do that; we are all too busy with other things and nobody wants to.

Me: ah, I see! It’s an incentive problem! So I guess your funding needs to be conditional on you producing legible outputs.

Me, reflecting afterwards: hmm… Cynically,[2] not publishing is a really good way to create a moat around your research… People who want to work in that area have to come talk to you, and you can be a gatekeeper. And you don’t have to worry about somebody with more skills and experience coming along and trashing your work, or out-competing you and rendering it obsolete…

EtA: In comments, people have described adhering to academic standards of presentation and rigor as “jumping through hoops”. There is an element of that, but this framing really misses the value these standards have for the academic community. That’s a longer discussion, though…

  1. ^

    There are roughly three AI safety communities in my account:
    1) people in academia
    2) people at industry labs who are building big models
    3) the rest (the Alignment Forum/LessWrong and EA being big components). I’m not sure where to classify newer orgs like Conjecture and Redwood, but for the moment I put them here.

    I’m referring to the last of these in this case.

  2. ^

    I’m not accusing anyone of having bad motivations; I think it is almost always valuable to consider both people’s conscious motivations and their incentives (which may be subconscious (EtA: or indirect) drivers of their behavior).