I was encouraged to ask these questions under this post instead of over email, so I’ll do so:
I wanted to ask some questions about which papers are within scope.
Firstly, what about work that overlaps Cooperative AI and Agent Foundations? I’ve got a paper that ties game theory together with percolation theory in order to better predict fixed points in complex systems; is this within scope or not?
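To give a concrete flavor of the kind of model I mean (a toy sketch of my own, not the paper’s actual construction; every name and modeling choice here is an illustrative assumption): take a site-percolated grid, let each occupied site play a two-action coordination game with its occupied neighbors, and run best-response dynamics to a fixed point.

```python
import random

def percolated_grid(n, p, seed=0):
    """Site percolation: each cell of an n x n grid is occupied
    independently with probability p."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(n) if rng.random() < p}

def occupied_neighbors(site, occupied):
    i, j = site
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if (i + di, j + dj) in occupied:
            yield (i + di, j + dj)

def best_response_fixed_point(occupied, seed=0, max_sweeps=200):
    """Sequential best-response dynamics for a two-action coordination
    game on the occupied sites (majority rule; ties keep the current
    action). Coordination games are potential games, so sequential best
    response converges to a pure Nash equilibrium -- a fixed point of
    the dynamics."""
    rng = random.Random(seed)
    action = {s: rng.randint(0, 1) for s in occupied}
    sites = list(occupied)
    for _ in range(max_sweeps):
        changed = False
        rng.shuffle(sites)
        for s in sites:
            votes = [action[nb] for nb in occupied_neighbors(s, occupied)]
            if votes:
                ones = sum(votes)
                if ones * 2 > len(votes):
                    br = 1
                elif ones * 2 < len(votes):
                    br = 0
                else:
                    br = action[s]  # tie: keep current action
                if br != action[s]:
                    action[s] = br
                    changed = True
        if not changed:  # nobody wants to switch: Nash fixed point
            return action
    return action

occupied = percolated_grid(40, 0.6)
profile = best_response_fixed_point(occupied)
print(f"fraction playing action 1: {sum(profile.values()) / len(profile):.2f}")
```

The interesting interaction in this toy version is how the percolation parameter p shapes which fixed points are reachable: below the percolation threshold the grid fragments, so local conventions can’t spread globally.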
Secondly, I believe there are various problems in scaling technical single-agent safety to systemic problems like Gradual Disempowerment. We also need to be able to detect agency in larger systems, and Michael Levin has a body of work on establishing “diverse intelligence”. Would agent foundations work on diverse intelligence be compatible? A sort of scaling of agent foundations up to systemic disciplines like computational social science.
Finally, what about taxonomy papers? One frustration I’ve run into is the lack of a taxonomy of agent definitions, which makes it hard to be precise in discussions. An “agent” can mean a lot of things, and I was thinking that putting together a taxonomy of what existing fields consider agents might be useful.
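To illustrate what I mean by a taxonomy (the entries below are my own rough, non-authoritative paraphrases; a real version would cite sources and be far more careful):

```python
from dataclasses import dataclass

@dataclass
class AgentDefinition:
    field: str            # discipline the definition comes from
    definition: str       # one-line paraphrase of the core notion
    criteria: list[str]   # properties the field uses to detect agency

# Illustrative paraphrases only -- not settled or cited definitions.
TAXONOMY = [
    AgentDefinition(
        "economics",
        "a rational actor maximizing expected utility",
        ["preferences", "utility maximization", "choice under constraints"],
    ),
    AgentDefinition(
        "reinforcement learning",
        "a policy mapping observations to actions to maximize reward",
        ["environment interface", "reward signal", "sequential decisions"],
    ),
    AgentDefinition(
        "philosophy of action",
        "an entity whose behavior is caused by intentional states",
        ["beliefs", "desires", "intentions"],
    ),
    AgentDefinition(
        "diverse intelligence (Levin)",
        "a system with some competency at navigating a problem space",
        ["goal-directedness", "competency", "scale-free agency"],
    ),
]

# Laying definitions side by side makes the disagreements explicit:
# which criteria each field requires, and where the notions conflict.
for d in TAXONOMY:
    print(f"{d.field:>28}: {d.definition}")
```

Even a crude comparison like this makes it obvious that “agent” in an RL paper and “agent” in an economics paper pick out different criteria, which is exactly the specification problem I keep running into.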
I love the idea of ILIAD; I think it’s needed and awesome.
Thanks, we really appreciate the questions.
Our general approach to scope is to ask (1) whether the topic is worth studying, and (2) whether there is no other venue that can offer a substantially better review. If so, we’ll probably say yes. (We generally want to avoid reviewing manuscripts for which good journals that accept submissions on the topic already exist, e.g., almost all interpretability.) We are willing to go outside our comfort zone to get worthwhile manuscripts reviewed imperfectly if the alternative is that they get reviewed nowhere. One advantage of the reviewer-abstract idea is that it lets the reviewers communicate their uncertainty to the potential reader.
Both of the interdisciplinary papers you mention sound fine. In cases like these we may ask the authors to put in special effort to help us locate qualified (and reasonably unbiased) reviewers.
Review and taxonomy papers are fine, and indeed we’d love to see something that collects and compares the various definitions of “agent” in both the conventional literature and on the Alignment Forum. For us the question isn’t “Is this novel enough to ‘deserve’ publication?”; it’s “Is this worth writing? Are there at least a few researchers who will find it significantly more useful than what’s already been written?”.