This would give me a 100-fold speedup on top of the 10-fold speedup, for a 1,000-fold speedup in total. I won’t claim that this would be a comfortable way to code, but it could be done.
While this seems possible in principle, it doesn’t sound as practical as the approach of massively parallelizing one or a smaller set of projects.
The problem is you write project 1, and then by the time it finishes in, say, 30 seconds, a year has gone by and you have just finished writing code for project 100. The problem would be the massive subjective lag for getting any debugging feedback and the overhead of remembering what you were working on a year ago. You then make changes, and it’s a year’s turnaround again to test them . . .
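The "a year goes by" figure follows directly from the numbers in the thread; a quick sketch of the arithmetic, assuming the 1,000,000× subjective speedup and 30-second compile cycle discussed here:

```python
# Subjective lag for a mind running 1,000,000x real time, waiting on a
# compile/test cycle of 30 wall-clock seconds. Both figures are the
# assumptions used in this thread, not measured values.

SPEEDUP = 1_000_000          # subjective seconds experienced per real second
WALL_CLOCK_CYCLE_S = 30      # one compile/test round trip, in real seconds

subjective_seconds = WALL_CLOCK_CYCLE_S * SPEEDUP
subjective_years = subjective_seconds / (365.25 * 24 * 3600)

print(f"{subjective_years:.2f} subjective years per 30-second cycle")
# ~0.95 years, i.e. roughly the year-long debug turnaround described above
```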
I suspect that making a massively parallel compiler/linker/language to help close the speed gap somewhat would be the more effective primary strategy.
My larger point was that the effective speedup across the space of tasks/fields is highly uneven, and a superfast thinker would get the most utility out of occupations with low computational demands that involve abstract thinking, such as writing.
50 years ago computers were much, much slower, but human minds were just as fast as they are today. Was it optimal back then to be a writer rather than a programmer?
If you thought one million times faster than any other human mind, then absolutely. It’s not an analogy at all. There is no analogy.
The problem would be the massive subjective lag for getting any debugging feedback and the overhead of remembering what you were working on a year ago.
Yes, I admit it would not be an ideal coding environment, but it could be done. Brain-time is cheap, so you have plenty of cycles to spare relearning your code from scratch after each debug cycle. You also have plenty of time to spare to write immaculate documentation, to ease the relearning process.
I suspect that making a massively parallel compiler/linker/language would be the most effective.
I agree. It would be my first project. Even if it took 100,000 years that’s only a month real time! Hopefully I wouldn’t go insane before finishing =D
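The "only a month real time" figure is the inverse conversion; a sketch under the same assumed 1,000,000× speedup:

```python
# Real-time duration of a 100,000-subjective-year project at an assumed
# 1,000,000x subjective speedup (both numbers taken from the comment above).

SPEEDUP = 1_000_000          # subjective time elapsed per unit of real time
SUBJECTIVE_YEARS = 100_000   # project length as experienced by the thinker

real_years = SUBJECTIVE_YEARS / SPEEDUP
real_days = real_years * 365.25

print(f"{real_days:.1f} real days")
# ~36.5 days: close enough to "only a month"
```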