[Question] (When) do high-dimensional spaces have linear paths down to local minima?

When training (to convergence) with gradient descent in high dimensions, the loss often decreases monotonically along the linear path between the initialization weights and the local minimum found (sometimes called "monotonic linear interpolation"), even though the actual trajectory followed by training is highly nonlinear. Is this true of high-dimensional spaces in general?

  • Does the energy of a protein conformation decrease monotonically as you move along a linear path in the high-dimensional folding space (parameterized by, say, the dihedral angles along the amino-acid chain) from the unfolded configuration to the folded one?

  • When a food is developed from a simpler food, does an intermediate food, halfway between the two in every ingredient and spice concentration, taste better than the simpler food?

  • When moving along a linear path in pixel space from a random image to an image of a dog, the image seems to look monotonically more like a dog.
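The neural-network version of the claim is easy to probe numerically. Here is a minimal sketch in plain Python (no libraries, toy data invented for illustration): fit a logistic regression by gradient descent, then evaluate the loss along the straight segment from the initialization to the found minimum. Note this is the easy case: the loss here is convex, and a convex function restricted to a line is convex with its minimum at the endpoint, so monotone decrease along the segment is guaranteed. The surprising empirical fact is that nonconvex neural-network losses often behave the same way.

```python
import math
import random

random.seed(0)
# Toy binary-classification data: label is 1 when x0 + x1 > 0.
xs = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(100)]
data = [((x0, x1), 1.0 if x0 + x1 > 0 else 0.0) for x0, x1 in xs]

REG = 0.05  # small L2 penalty keeps the minimum finite and unique

def loss(w):
    """Mean cross-entropy of logistic regression, plus L2 regularization."""
    total = 0.0
    for (x0, x1), y in data:
        z = w[0] * x0 + w[1] * x1 + w[2]
        p = 1.0 / (1.0 + math.exp(-z))
        total -= y * math.log(p + 1e-12) + (1 - y) * math.log(1 - p + 1e-12)
    return total / len(data) + REG * sum(wi * wi for wi in w)

def grad(w):
    """Gradient of loss(w), derived by hand for this 3-parameter model."""
    g = [0.0, 0.0, 0.0]
    for (x0, x1), y in data:
        z = w[0] * x0 + w[1] * x1 + w[2]
        p = 1.0 / (1.0 + math.exp(-z))
        d = p - y
        g[0] += d * x0
        g[1] += d * x1
        g[2] += d
    return [gi / len(data) + 2 * REG * wi for gi, wi in zip(g, w)]

w_init = [random.gauss(0, 2) for _ in range(3)]
w = list(w_init)
for _ in range(2000):  # plain gradient descent, run to (near) convergence
    w = [wi - 0.5 * gi for wi, gi in zip(w, grad(w))]

# Evaluate the loss along the straight segment from w_init to the minimum.
ts = [i / 20 for i in range(21)]
path = [loss([(1 - t) * a + t * b for a, b in zip(w_init, w)]) for t in ts]
monotone = all(later <= earlier + 1e-8 for earlier, later in zip(path, path[1:]))
print("monotone decrease along linear path:", monotone)
```

For a nonconvex loss the same check can fail: the segment may cross a loss barrier even when a low-loss curved path exists, which is exactly what the question is asking about.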

If this doesn’t usually happen, what is the underlying property of the high-dimensional space that determines whether monotonically decreasing loss holds?