The faster cell simulation technologies advance, the weaker the hardware they’ll run on will be relative to what they demand.
If hardware growth strictly followed Moore’s Law and CPUs (or GPUs, etc.) were completely general-purpose, this would be true. But if cell simulation became a dominant application for computing hardware, one could imagine instruction set extensions, or even entire architecture changes, designed around it. Obviously, it would also take some time for software to take advantage of such hardware changes.
Well, first cell simulation has to become dominant enough (for which it would need to be common enough, for which it would need to be useful enough: useful for what?). Then, hardware specialization is not easy either. And on top of that, specialized hardware locks designs in, preventing easy modification and optimization. That applies especially if we’re talking about specializing beyond the way GPUs are already specialized for parallel floating point computation.
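To make the last point concrete, here is a minimal sketch (my own illustration, not from the discussion above) of the kind of workload GPUs are already specialized for: a toy diffusion step over a grid of chemical concentrations, written as whole-array floating point operations. Updates shaped like this map directly onto existing parallel hardware; the open question is whether realistic cell simulation kernels stay this regular, or need something more exotic that would justify, and be locked in by, deeper specialization. The function name and parameters are hypothetical.

```python
import numpy as np

def diffusion_step(c, rate=0.1):
    """One explicit diffusion step on a 2D concentration field.

    This is a standard 5-point Laplacian update with periodic
    boundaries: exactly the dense, regular, parallel floating point
    pattern GPUs are built for.
    """
    # Sum of the four nearest neighbors (periodic boundary via roll).
    neighbors = (np.roll(c, 1, axis=0) + np.roll(c, -1, axis=0) +
                 np.roll(c, 1, axis=1) + np.roll(c, -1, axis=1))
    return c + rate * (neighbors - 4 * c)

rng = np.random.default_rng(0)
c = rng.random((64, 64))       # toy concentration field
c_next = diffusion_step(c)

# This scheme conserves total concentration (up to float rounding),
# since each cell gives to its neighbors exactly what they receive.
print(np.isclose(c.sum(), c_next.sum()))
```

Nothing here argues for or against specialization by itself; it just shows the baseline that any cell-simulation-specific hardware would have to beat.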