I meant, we don’t so much predict ‘the future world’ as the changes to it, so as to cut down on the amount we need to simulate.
Is it even worth doing? I do not know—which is why I propose profiling before optimising.
What if I know? I am a software developer. I propose a less expensive method for deciding on algorithmic optimizations: learn from existing software, such as chess AIs (which are packed with algorithmic optimizations).
edit: also, profiling won’t tell you that a high-level optimization is worth doing. Suppose you write a program that eliminates duplicate entries from a file, and you did it the naive way: comparing each entry to each, O(n^2). Profiling may tell you whether most of the time is spent reading the entries or comparing them, and you may spend time optimizing those, but it won’t tell you that you can sort the entries first and then eliminate duplicates efficiently. The same goes for things like raytracers in computer graphics.

A practical example from a programming contest: the contestants had 10 seconds to render an image with a lot of light reflecting inside ellipsoids (the goal was accuracy of the output). The reference image was produced by straightforward photon mapping—photons shot randomly from the light sources—run for ~10 hours. The noise there is proportional to 1/sqrt(n); it converges slowly. The top contestants, myself included, fired photons in organized patterns instead, and the result converged as 1/n. With n very large even in a single second, the contestants beat the contest organizers’ reference image by far. It would have taken months for the organizers’ solution to beat what the contestants produced in 10 seconds. (edit: the contest sort of failed as a result, though, because the only way to rank the images was to compare them against the organizers’ solution)
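To make the duplicate-elimination point concrete, here is a minimal sketch of mine (not code from the post): two implementations whose per-line cost a profiler would happily measure, while the real win is switching from one to the other.

```python
def dedup_naive(entries):
    """O(n^2): compare each entry against every entry kept so far."""
    result = []
    for e in entries:
        if e not in result:  # linear scan -> quadratic overall
            result.append(e)
    return result


def dedup_sorted(entries):
    """O(n log n): sort first, then drop adjacent duplicates."""
    result = []
    for e in sorted(entries):
        if not result or result[-1] != e:
            result.append(e)
    return result
```

A profiler pointed at `dedup_naive` would show the time going into the `in` comparisons and suggest speeding those up; only the algorithmic change (sorting, or equivalently a hash set) removes the quadratic blowup. Note that `dedup_sorted` returns the entries in sorted order rather than first-seen order.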
The profiler—well, sure, the contest organizers could have run a profiler instead of ‘optimizing prematurely’, and could have found that refraction (or ray–ellipsoid intersection, or whatever else) was where they spent most of their time, and they could have optimized those, for an unimportant speed gain. The truth is, they did not even know that their method was too slow until they saw the superior method (they wouldn’t have thought so if told, nor could they have been convinced by the reasoning the contestants had used to pick their method).
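The 1/sqrt(n)-versus-1/n contrast shows up even in a toy one-dimensional integral (a sketch of mine, not the contest code): random samples behave like Monte Carlo photon shooting, while evenly spaced samples behave like an organized pattern.

```python
import random


def f(x):
    return x * x  # true integral over [0, 1] is 1/3


def mc_estimate(n, seed=0):
    """Random sampling: error shrinks like 1/sqrt(n)."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n


def grid_estimate(n):
    """Evenly spaced samples (left endpoints): error shrinks like 1/n."""
    return sum(f(i / n) for i in range(n)) / n
```

Multiplying n by 100 buys the random estimator roughly one extra decimal digit, while the grid estimator gains two; in the contest setting, with n in the millions per second, that gap is exactly why organized patterns beat ten hours of random photons.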