[Question] Could we go another route with computers?
Today's computers are mostly semiconductors performing logic operations. There are, of course, other parts, but they are mostly structural and don't do actual computation.
But imagine computer history took a different route: you could buy different units that use different physics to do different calculations. You could buy a module with a laser and a liquid crystal screen that does a Fourier transform. You could buy a module with tiny beads that does gravity sort. I could think of more examples, but I think you get the idea.
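The "gravity sort" module above would physically run what's usually called bead sort: each number becomes a row of beads threaded on vertical rods, gravity pulls the beads down, and the settled rows read off as the sorted sequence. As a rough illustration (a minimal sketch of the physical process, not how such a module would actually be built), here is a digital simulation:

```python
def bead_sort(nums):
    """Simulate 'gravity sort' (bead sort) for non-negative integers.

    Each number n is a horizontal row of n beads on vertical rods.
    When gravity pulls the beads down, rod i ends up holding one bead
    for every number greater than i; reading the settled rows back
    gives the numbers in sorted order.
    """
    if any(n < 0 for n in nums):
        raise ValueError("bead sort only works for non-negative integers")
    rods = max(nums, default=0)
    # Height of the bead stack on rod i = how many inputs exceed i.
    stack = [sum(1 for n in nums if n > i) for i in range(rods)]
    # Row j (counting from the bottom) holds one bead per rod whose
    # stack is taller than j; that bead count is the j-th largest value.
    rows = [sum(1 for height in stack if height > j) for j in range(len(nums))]
    return rows[::-1]  # reverse descending rows into ascending order
```

Note that the simulation takes O(n·max) work, while the physical version "computes" all the bead drops in parallel, essentially for free — which is exactly the appeal of outsourcing the math to physics.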
Maybe this route wouldn't work because it's economically much easier to set up one unified manufacturing pipeline and focus on speeding up general-purpose computers than to set up many specialized pipelines for specialized computations? Am I just describing the pre-digital era, with its mechanical integrators and various analog radio schemes? Or maybe what I'm trying to describe ended up taking the form of easily shareable software libraries instead?
And, of course, there are examples of different physics being used to build computers (quantum computers being the most famous), but my intuition says that enormous numbers of transistors shouldn't be the fastest way to compute almost everything — and I don't observe the variety that this intuition would suggest.
Those, of course, are silly examples that wouldn't actually work; they're just meant to show that different possibilities exist. The general idea is that you can outsource your math to physics in more than one way.