[Question] Why does gradient descent always work on neural networks?

My amateur understanding of neural networks is that they are almost always trained with stochastic gradient descent (SGD). The quality of a neural network seems to come from its size, architecture, and training data, but not from the optimization procedure, which is essentially always plain gradient descent.
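
To make sure I'm describing the same thing everyone else means, here is roughly the training loop I have in mind. This is just a minimal NumPy sketch of SGD on a made-up linear regression problem, not any particular framework's API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: y = 3x + noise (purely for illustration)
X = rng.normal(size=(256, 1))
y = 3.0 * X + 0.1 * rng.normal(size=(256, 1))

# A single linear "layer": y_hat = X @ w + b
w = rng.normal(size=(1, 1))
b = np.zeros((1,))

lr = 0.1          # learning rate
batch_size = 32

for epoch in range(20):
    perm = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]
        xb, yb = X[idx], y[idx]

        # Forward pass and mean-squared-error residual
        y_hat = xb @ w + b
        err = y_hat - yb

        # Gradients of the loss with respect to the parameters
        grad_w = 2 * xb.T @ err / len(xb)
        grad_b = 2 * err.mean(axis=0)

        # The SGD update: step against the gradient
        w -= lr * grad_w
        b -= lr * grad_b

print(w, b)  # ends up close to 3 and 0
```

As far as I can tell, everything from image classifiers to language models is trained with some variant of exactly this loop.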

This is a bit unintuitive to me, because gradient descent is only guaranteed to find the global minimum of a function when that function is convex; on a non-convex function it can get stuck in a local minimum or a saddle point. And I wouldn’t expect typical ML problems (e.g., “find the dog in this picture” or “continue this writing prompt”) to have convex cost functions. So why does gradient descent always work?
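
Here is the kind of failure I would naively expect. On a toy non-convex function I made up for illustration, f(x) = x^4 - 3x^2 + x, plain gradient descent lands in either the global minimum or the local one depending purely on where it starts:

```python
def f(x):
    # Non-convex: one global minimum (near x = -1.30) and one local minimum (near x = 1.13)
    return x**4 - 3 * x**2 + x

def grad_f(x):
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x0, lr=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

# Same algorithm, two starting points, two different answers:
x_left = gradient_descent(-2.0)   # converges to the global minimum
x_right = gradient_descent(2.0)   # gets stuck in the local minimum
print(x_left, f(x_left))
print(x_right, f(x_right))
```

If this can happen for a one-dimensional polynomial, I'd expect it to be much worse for a loss surface over millions of weights.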

One explanation I can think of: gradient descent doesn’t work if your goal is to find the optimal answer, but we hardly ever need the optimal answer; a good-enough one will do. For example, a neural network trained to play Go doesn’t have to find the best move, it just has to find a winning move. I’m not sure whether this explanation holds up, though.