Large Stacks: Increasing Algorithmic Clarity
Insight: Increasing stack size enables writing algorithms in their natural recursive form without artificial limits. Many algorithms are most clearly expressed as non-tail-recursive functions; large stacks (e.g., 32GB) make this practical for experimental and prototype code where algorithmic clarity matters more than micro-optimization.
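For example (a hypothetical `tree_height` over a bare node type, not from the original): the natural form needs both child results after the calls return, so no tail-call transformation applies and each level genuinely occupies a frame. Depth is bounded only by the tree's height and the stack size.

```c
#include <stddef.h>

/* Hypothetical node type for illustration. */
struct node { struct node *left, *right; };

/* Natural non-tail-recursive form: both recursive results are used
   after the calls return, so each level keeps a live stack frame. */
static size_t tree_height(const struct node *n) {
    if (n == NULL) return 0;
    size_t l = tree_height(n->left);
    size_t r = tree_height(n->right);
    return 1 + (l > r ? l : r);
}
```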
Virtual memory reservation is free. Setting a 32GB stack costs nothing until pages are actually touched.
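A minimal sketch of putting this into practice (64-bit Linux with pthreads assumed; `worker` and the recursion depth are illustrative): reserve a 32GB stack for one thread and recurse to a depth no default stack could survive. The reservation is address space only; pages commit lazily on first touch, subject to the platform's overcommit policy.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Natural non-tail recursion: each frame survives across the call.
   Compile at -O0, or the compiler may rewrite this as a loop. */
static long depth_sum(long n) {
    if (n == 0) return 0;
    return n + depth_sum(n - 1);
}

static void *worker(void *arg) {
    long n = *(long *)arg;
    printf("sum = %ld\n", depth_sum(n));
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    /* 32 GiB of address space; physical pages commit lazily on touch. */
    if (pthread_attr_setstacksize(&attr, 32UL << 30) != 0) {
        fprintf(stderr, "setstacksize failed\n");
        return EXIT_FAILURE;
    }
    long depth = 10000000L; /* ~10M frames: hundreds of MB of stack,
                               far beyond the default 8 MiB */
    pthread_t t;
    if (pthread_create(&t, &attr, worker, &depth) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return EXIT_FAILURE;
    }
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return EXIT_SUCCESS;
}
```

Build with `cc -O0 -pthread`. For the main thread the equivalent knob is the rlimit, e.g. `ulimit -s 33554432` (KiB units, so 32GB) in the launching shell.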
Stack size limits are OS policy, not hardware. The CPU has no concept of stack bounds: just a stack-pointer register and a few convenience instructions (push, pop, call, ret) that use it.
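Because the bound is just per-process policy, it can be read and raised at runtime (Linux sketch; on Linux the main-thread stack grows on demand and the soft limit is checked at fault time, so raising it before recursing typically takes effect immediately):

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0) { perror("getrlimit"); return 1; }
    printf("stack soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);

    /* Raise the soft limit to the hard cap (often RLIM_INFINITY).
       Equivalent to `ulimit -s unlimited` in the launching shell. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_STACK, &rl) != 0) { perror("setrlimit"); return 1; }
    return 0;
}
```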
Large stacks have zero performance overhead from the reservation. Real recursion costs: function call overhead, cache misses, TLB pressure.
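To sanity-check the zero-overhead claim, a hypothetical micro-benchmark (not from the original): run the same fixed-depth recursion under the default `ulimit -s` and again under a 32GB limit; per-call cost should be indistinguishable, since the reservation never enters the hot path.

```c
#include <stdio.h>
#include <time.h>

/* Compile at -O0 to keep real frames; optimizers may
   rewrite this accumulation as a loop. */
static long f(long n) { return n ? n + f(n - 1) : 0; }

int main(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    long s = f(100000); /* ~100k frames: fits a default 8 MiB stack */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ms = (t1.tv_sec - t0.tv_sec) * 1e3
              + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("sum=%ld in %.3f ms\n", s, ms);
    return 0;
}
```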
Conventional wisdom (“don’t increase stack size”) protects against: infinite recursion bugs, wrong tool choice (recursion where iteration is better), thread overhead at scale (thousands of threads).
Ignore the wisdom when: single-threaded, interactive debugging available, experimental code where clarity > optimization, you understand the actual tradeoffs.
Note: Stack memory commits permanently. When deep recursion touches pages, the OS commits physical memory to them. Most runtimes never release it (though it seems it wouldn't be hard to do with `madvise(MADV_DONTNEED)`). One deep call likely permanently commits that memory until process death. Large stacks are practical only when: you restart regularly, or you accept permanent memory commitment up to the maximum recursion depth ever reached.
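A sketch of what that release could look like (Linux/glibc-only; `release_unused_stack` is a hypothetical helper, not something any runtime provides). Call it from a shallow frame after the deep recursion returns: the stack grows down, so everything between the base of the mapping and a safety margin below the current stack pointer is leftover from the deepest call.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical helper: give back the physical pages below the current
   stack pointer. The virtual reservation stays mapped; only the commit
   is dropped, and the pages read back as zeros if touched again. */
static void release_unused_stack(void) {
    pthread_attr_t attr;
    void *base;   /* lowest address of this thread's stack mapping */
    size_t size;
    pthread_getattr_np(pthread_self(), &attr);  /* glibc extension */
    pthread_attr_getstack(&attr, &base, &size);
    pthread_attr_destroy(&attr);
    (void)size;   /* only the base address is needed here */

    uintptr_t sp   = (uintptr_t)__builtin_frame_address(0);
    uintptr_t page = (uintptr_t)sysconf(_SC_PAGESIZE);
    uintptr_t lo   = (uintptr_t)base;
    uintptr_t hi   = (sp - 64 * 1024) & ~(page - 1); /* 64 KiB margin */
    if (hi > lo)
        madvise((void *)lo, hi - lo, MADV_DONTNEED); /* drop committed pages */
}
```

Not signal-safe, and racy if anything else inspects the stack, but it shows the mechanism is a single syscall over a known address range.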