Sorry for not checking here before the newsletter went out :/
My estimate is that after reading it, I would gain the impression that the text revolves around the abstract model. Which I thought wasn’t the case; definitely wasn’t the intention.
Hmm, I didn’t mean to imply this.
Also, I am not sure if it is intended that your summary doesn’t mention the examples and the “classifying research questions” subsection (which seems equally important to me as the list it generates).
That was somewhat intended—words are at a premium in the newsletter, so I have to make decisions about what to include. However, given that you find the classification subsection is equally important, I’ll at least add that in.
Finally, from your planned opinion, I might get the impression that the text suggests no technical problems at all.
That’s a fair point, I hadn’t realized that.
I’ve made the following changes to the LW version of the newsletter:
AI Services as a Research Paradigm (Vojta Kovarik) (summarized by Rohin): The CAIS report (AN #40) suggests that future technological development will be driven by systems of AI services, rather than a single monolithic AGI agent. However, there has not been much follow-up research since the publication of the report. This document posits that this is because the concepts of tasks and services introduced in the report are not amenable to formalization, which makes it hard to do research with them. So, it provides a classification of the types of research that could be done (e.g. do we consider the presence of one human, or many humans?), a list of several research problems that could be tackled now, and a simple abstract model of a system of services that could be built on in future work.
Rohin’s opinion: I was expecting a research paradigm that was more specific to AI, but in reality it is very broad and feels to me like an agenda around “how do you design a good society in the face of technological development”. For example, it includes unemployment, system maintenance, the potential for blackmail, side-channel attacks, prevention of correlated errors, etc. None of this is to say that the problems aren’t important—just that given how broad they are, I would expect that they could be best tackled using many different fields, rather than being important for AI researchers in particular to focus on.