Albert: I agree that the progress made on problems like logical induction is impressive, and that it has a solid chance of being very useful for AGI design. And I have a better understanding of your position now: sharing deep models of a problem is important. I just think that some other top thinkers will be able to make a lot of the key inferences themselves (look at Stuart Russell, for example), and we can help that along by providing funding and infrastructure.
I think the problem isn’t just that other people might not be able to make the key inferences, but that there won’t be common knowledge of the models/assumptions that people have. For example, Stuart Russell has thought a lot about research topics in AI safety, but I’m not actually aware of any write-ups detailing his models of the AI safety landscape and problem. (The best I could find was his “Provably Beneficial AI” Asilomar slides, the 2015 Research Agenda, and his AI FAQ, though all three are intended for a general audience.) It’s possible, albeit unlikely, that he has grokked MIRI’s models and still thinks that value uncertainty is the most important thing to work on (or call for people to work on) for AI safety. But even if this were the case, I’m not sure how we’d find out.
For example, we could fund events where top AI researchers in academia are given the space to share their models with researchers closer to our community.
Yup. I think this may help resolve the problem.
Huh, I like your point about common knowledge a lot. Will work on that.