What does that even mean?
Hard to describe exactly, but I’ll take a stab at it. Digital computers are different from mechanical adders because they implement a very ‘general’ algorithm, one that can be easily and rapidly configured to follow a wide range of other algorithms. That generality is what lets them run simulations of things. The brain shows this unusual flexibility to a much lesser extent, because there are many physical restrictions on the changes that can be made to the algorithm it runs. Similarly, you can physically modify a mechanical adder to change its algorithm so that it is no longer ‘adding’, but switching it to a different algorithm isn’t easy the way switching the algorithm running at the meta-level of a digital computer is.
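The contrast above can be sketched in a few lines of Python (the names here are invented for illustration): a machine whose algorithm is fixed by its construction, versus a machine that takes the algorithm itself as an input and can swap it instantly.

```python
# A 'mechanical adder': the algorithm IS the construction.
# Changing it would mean physically rebuilding the machine.
def mechanical_adder(a, b):
    return a + b

# A 'digital computer': the algorithm arrives as data.
# Switching algorithms is just passing in something else.
def digital_computer(algorithm, a, b):
    return algorithm(a, b)

# Same machine, two different algorithms, no physical change:
result_add = digital_computer(lambda a, b: a + b, 2, 3)  # behaves as an adder
result_mul = digital_computer(lambda a, b: a * b, 2, 3)  # now a multiplier
```

The point of the sketch is only that `digital_computer` is reconfigured by changing its input, while `mechanical_adder` can only be changed by rewriting (rebuilding) the function itself.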
Something something… degrees of freedom… Markov blankets… meta-level programming… mumble mumble…
An algorithm is an information-processing task, i.e. something that a Turing machine could do.
Sometimes there’s a machine that’s purpose-built, by its construction, to instantiate a particular algorithm. If you don’t like the mechanical adder example, here’s another: there’s an algorithm to increment a calendar date, which involves knowing how many days are in each month and figuring out whether it’s a leap year or not; and certain fancy watches have gear-based mechanisms that instantiate that algorithm (and they can get the right answer for centuries, even with all the weird leap-year rules).
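For concreteness, here is a rough Python sketch of that calendar-increment algorithm, the one a perpetual-calendar watch instantiates in gears (an illustration of the algorithm itself, not a model of any particular watch’s mechanism):

```python
# Days per month in a non-leap year, January through December.
DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def is_leap_year(year):
    # Gregorian rule: every 4th year, except centuries,
    # except every 400th year.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def next_day(year, month, day):
    """Increment a calendar date by one day."""
    days = DAYS_IN_MONTH[month - 1]
    if month == 2 and is_leap_year(year):
        days = 29
    if day < days:
        return (year, month, day + 1)
    if month < 12:
        return (year, month + 1, 1)
    return (year + 1, 1, 1)
```

The “weird leap-year rules” live entirely in `is_leap_year`: 1900 was not a leap year (century), but 2000 was (divisible by 400), which is exactly the case that only the fanciest perpetual-calendar mechanisms handle.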
Another example would be a special-purpose ASIC, like the tiny high-speed over-voltage-protection IC that you can find in a cell phone. In many cases, these kinds of chips have no “hardware vs software”, they’re not reprogrammable at all, they just have one particular thing that they do, and that design functionality is “burned in” via the placement of wires and logic gates.
IIUC, very early chips were all like that: there was a particular thing that people designed the chip to do, and they burned in that algorithm via the physical construction of the chip. …And then by the 1970s people came up with the idea that it would save a lot of design time, and help economies of scale, to make a smaller number of reprogrammable chips (I believe the Intel 4004 was a pioneering early example). Such a chip is still “a machine that runs an algorithm”, but the algorithm that the machine runs involves reading another arbitrary algorithm from memory and then doing whatever it says. It’s kinda confusing to think about a big unchangeable burned-in algorithm that finds and runs an arbitrary smaller algorithm nested inside … so instead, we don’t normally think of these reprogrammable chips as “a machine that runs an algorithm”, but rather we use terms like “hardware” and “software” etc.
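A toy sketch of that nesting may help (the instruction set and encoding here are invented for the sketch): one fixed, “burned-in” algorithm whose only job is to fetch an arbitrary other algorithm from memory and do whatever it says.

```python
def run(program, x):
    """The unchangeable 'hardware' algorithm: fetch, decode, execute."""
    acc = x    # a single accumulator register
    pc = 0     # program counter
    while pc < len(program):
        op, arg = program[pc]   # fetch and decode the next instruction
        if op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
        elif op == "HALT":
            break
        pc += 1
    return acc

# The 'software': two different algorithms stored as data,
# run on the same machine without touching its construction.
double_plus_one = [("MUL", 2), ("ADD", 1), ("HALT", 0)]
add_ten         = [("ADD", 10), ("HALT", 0)]
```

Here `run` is the big burned-in algorithm, and `double_plus_one` and `add_ten` are the smaller arbitrary algorithms nested inside it, which is roughly the hardware/software split in miniature.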
Does that help?
While I’m not disputing the substance of what you’re saying here (apart from the 4004 timeline), from a computer science perspective I’m a bit annoyed at the terminology. A machine that can load computational instructions from a storage medium would traditionally be called a programmable computer, whereas the system you describe as “a machine that runs an algorithm” is precisely a computer. I understand that this nuance isn’t represented in more popular terminology, but I feel that an article that is precise about the difference could benefit from also using the more precise terminology.
Thanks, I didn’t know that. I just added that as a footnote at the top of the post.
Wow are you saying Universal Computers were only built in the 70s? Half a century after Turing?
They were not; computers were programmable long before that. Before the 4004, the functionality that we today find in a CPU was distributed over a larger collection of circuitry, with separate components for the ALU, the instruction interpreter, the register bank, the memory controller, and so on. But that assembly was already programmable and had functioned as a Turing-universal computer since the late 1940s. The innovation of the Intel 4004 was that it was the first design to put all that machinery on a single integrated chip (the first CPU as we might understand it today, in the sense of being the first central processing unit; earlier designs were decentralized, though the term “CPU” was already in use before then).