Also, we have a huge amount of mental architecture devoted to understanding and remembering the spatial relationships of objects (for obvious evolutionary reasons). Using space as a metaphor for purely abstract things lets us reuse that architecture to make other tasks easier.
A very structured version of this would be something like a memory palace, where you assign ideas to specific locations in a place, but I think we often do the same thing more loosely when we talk about ideas in spatial terms and build mental models of them as existing in spatial relationship to one another (or at least I do).
Saying that people would be better off taking more risks under a particular model elides the question of why they don't take those risks to begin with, and how we could change that (if it's even desirable to do so).
The psychological impact of a loss of $x is generally higher than that of a corresponding gain of $x. So if I know I will feel worse about losing $10 than I will feel good about gaining $100, then it's entirely rational under my utility function to decline a 50/50 bet between those two outcomes. Maybe I would be better off overall if I didn't overweight losses, but utility functions aren't easily rewritable by humans; the closest you could come is some kind of exposure therapy for losses.
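To make that arithmetic concrete, here's a minimal sketch using a prospect-theory-style value function. The function names, the loss-aversion coefficients, and the stakes are my illustrative assumptions, not something from the comment above:

```python
# A minimal sketch of loss-averse expected utility, in the style of
# prospect theory's value function. The lambda values and stakes below
# are illustrative assumptions.

def value(x: float, loss_aversion: float) -> float:
    """Subjective value of an outcome x: losses are weighted more heavily."""
    return x if x >= 0 else loss_aversion * x

def bet_value(gain: float, loss: float, loss_aversion: float) -> float:
    """Expected subjective value of a 50/50 bet between +gain and -loss."""
    return 0.5 * value(gain, loss_aversion) + 0.5 * value(-loss, loss_aversion)

# With mild loss aversion (Kahneman & Tversky estimated lambda ~= 2.25),
# a 50/50 bet of +$100 vs -$10 still feels worth taking:
print(bet_value(100, 10, 2.25))   # 38.75 > 0: take the bet

# But someone whose utility weights the $10 loss above the $100 gain
# (effectively lambda > 10) rationally declines the same bet:
print(bet_value(100, 10, 12.0))   # -10.0 < 0: decline
```

The point is just that "irrational" risk avoidance can be a rational response to a utility function that penalizes losses heavily; the dispute is over the weighting, not the decision rule.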