The “drilling down along a new and different branch of the tree” concept makes me think of tree search algorithms, most naively depth-first or breadth-first search. It’s overly simplified, but it might uncover related theory.

The goal is to search from whichever node you estimate to be closest to the goal. Calculating that estimate is difficult, so we tend to only look at a small nearby neighbourhood, which is usually low level. Backtracking forces you to make estimates for earlier nodes.

If I were making this algorithm faster, I’d try to find a way to make the heuristic (the estimate of nearness to the goal) cheaper or more accurate. I’ve no idea how to do that, but maybe looking at how past discoveries were made could help. Then again, given that research takes a long time, maybe it’s not worth sacrificing any heuristic accuracy for speed.
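Expanding whichever frontier node the heuristic says is closest to the goal is essentially greedy best-first search. A minimal sketch, using a toy graph and a made-up heuristic purely for illustration:

```python
import heapq

def greedy_best_first(graph, start, goal, h):
    """Always expand the frontier node whose heuristic estimate h(node)
    claims is closest to the goal; return the path found (or None)."""
    frontier = [(h(start), start, [start])]  # (estimate, node, path so far)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (h(nbr), nbr, path + [nbr]))
    return None

# Toy example: nodes are integers, the goal is 7, and the heuristic
# is simply distance on the number line (a stand-in for a real estimate).
graph = {1: [2, 3], 2: [4, 5], 3: [6, 7], 6: [7]}
path = greedy_best_first(graph, 1, 7, h=lambda n: abs(7 - n))
print(path)  # [1, 3, 7] — the branch with the better estimate is drilled first
```

A misleading heuristic sends the search down the wrong branch first, which is where the backtracking cost in the comment above comes from.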