I have a strong inside view of the alignment problem and of what a solution would look like. The main reason I don't have an equally concrete inside-view AI timeline is that I don't know enough about ML, so I have to defer to others to get a specific decade. The biggest gap in my model of the alignment problem is what a solution to inner misalignment would look like, though I suspect it would involve something like finding a way to avoid wireheading.
My bad. I’m glad to hear you do have an inside view of the alignment problem.
If knowing enough about ML is your bottleneck, perhaps that's something you can focus on directly? I don't expect it to be hard for you (perhaps only about six months of study) to get to a point where you have coherent inside-view models of timelines.