To add to your list of various alternatives… My personal skepticism re SI is the apparent lack of any kind of “Friendly AI roadmap”, or at least nothing that I could easily find on the SI site or here. (Could be my sub-par search skills, of course.)
I hear Eliezer is planning to start writing a sequence on open problems in Friendly AI soon.
That’s a different task… I’d expect to see something like “in phase one we plan to do this during this timeframe; next, depending on the outcome of phase one, we plan to proceed along the following lines, which we expect will take from m to n years...”, rather than a comprehensive list of all open problems. The latter is hard and time-consuming; the former should take no longer than a page to write down, at least as a first draft.
I don’t think it would be reasonable to develop such a roadmap at this point, since it would require relatively high certainty in a specific plan. It’s not yet clear whether the best idea is to try to be the first to develop FAI, to pursue one of the proposals listed in the OP, or to do something else entirely; it’s not even clear how long it will take to figure that out. Given all that, such a specific roadmap seems impossible.
And this vagueness pattern-matched perfectly to various failed undertakings, hence my skepticism.
In my model, it also pattern-matches with “Fundamental research that eventually gave us Motion, Thermodynamics, Relativity, Transistors, etc.”
This looks somewhat like what you’re asking for, although it does leave a bit to be desired.
No, it does not look like a roadmap at all. This is a roadmap: concrete, measurable goals. The strategic plan has no milestones and no timelines.
Timelines don’t work very well on such slippery topics: you work at it until you’re done. Milestones, though, are still necessary, for sure.
Apparently they are still figuring out that roadmap, which is a problem with any research, even non-academic research. I assume the sub-topics discussed so far are the central open questions; if SI keeps some insights secret, that could be either for the sake of security or a lack of transparency. In some cases, people worry too much about the institution.