[Question] How do we identify bottlenecks to scientific and technological progress?

In discussions of AI, nanotechnology, brain-computer interfaces, and genetic engineering, I’ve noticed a common theme of disagreement over the right bottleneck to focus on. The general pattern is that one person or group argues that we know enough about a topic’s foundations that it’s time to focus on achieving near-term milestones, often engineering ones. The other group counters that such a milestone-focused, near-term approach is futile because we lack the fundamental insights needed to achieve the long-term goals we really care about. I’m interested in heuristics we can use, or questions we can ask, to try to resolve these disagreements. For example, in the recent MIRI update, Nate Soares argues that we’re not prepared to build an aligned AI because we can’t talk about the topic without confusing ourselves. While this post focuses on capability, not safety, I think “can we talk about the topic without confusing ourselves” is a useful heuristic for understanding how ready we are to build, independent of safety questions.

What follows are a few links to and descriptions of concrete examples of this pattern:

  • Disagreements in the ML/AI world over how far toward AGI deep learning can get us. See, for example, Gary Marcus’s Deep Learning: A Critical Appraisal. While the disagreement isn’t only about chasing near-term goals vs. pursuing paradigm shifts, that tension seems like an important piece of it.

  • Arguments between “the Drexler camp” and mainstream chemists over how close we are to building assembler-based nanotech. Representative quote from here: “The Center for Responsible Nanotechnology writes ‘A fabricator within a decade is plausible – maybe even sooner’. I think this timeline would be highly implausible even if all the underlying science was under control, and all that remained was the development of the technology. But the necessary science is very far from being understood.”

  • Ed Boyden’s point in an Edge interview (quoted below), which mirrors the Rocket Alignment Problem in a number of ways, but contrasts with the optimism of VCs/entrepreneurs such as Elon Musk and Bryan Johnson about the potential for radically transformative brain-computer interfaces to go to market in the next 1–2 decades.

People forget. When they landed on the moon, they already had several hundred years of calculus, so they have the math; physics, so they know Newton’s Laws; aerodynamics, you know how to fly; rocketry, people were launching rockets for many decades before the moon landing. When Kennedy gave the moon landing speech, he wasn’t saying, let’s do this impossible task; he was saying, look, we can do it. We’ve launched rockets; if we don’t do this, somebody else will get there first.

I anticipate at least one answer to this question will look something like “look at the science and see whether you understand the phenomena you wish to engineer on top of well enough”, but I think this answer doesn’t fully solve the problem. For example, in the case of nanotech, Drexler’s argument centers on the point that successful engineering requires finding one path to success, not necessarily understanding the entire space of possible phenomena.

EDIT (01/02/2019): I removed references to safety/alignment after ChristianKl noted that conflating the two makes the question more confusing, and John_Maxwell_IV argued that I was misrepresenting his (and likely others’) views on alignment. The post now focuses solely on the question of identifying bottlenecks to progress.