Analog computing is quite old. The issue with going analog is that when you have errors and noise as you do calculations, you can’t do long chains of calculations without ending up with just the noise.
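A minimal simulation of this error accumulation (the 1% per-stage noise level is an arbitrary illustrative assumption, not tied to any particular hardware):

```python
import random
import statistics

def noisy_multiply(x, y, rel_noise=0.01):
    # One "analog" stage: multiply, then perturb the result by ~1% relative noise.
    return x * y * (1 + random.gauss(0.0, rel_noise))

def chain_error(n_ops, trials=200):
    # Mean absolute drift of a chain of n_ops noisy multiplications by 1.0
    # (the exact answer is always 1.0, so any deviation is pure noise).
    errs = []
    for _ in range(trials):
        v = 1.0
        for _ in range(n_ops):
            v = noisy_multiply(v, 1.0)
        errs.append(abs(v - 1.0))
    return statistics.mean(errs)

random.seed(0)
print(chain_error(10), chain_error(1000))  # the longer chain drifts far more
```

The per-stage errors random-walk, so the drift grows roughly with the square root of the chain length; after enough stages the signal is mostly noise.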
OP link isn’t analog, it’s digital. (There are analog approaches with large speedups, of course.)
It’s analog on the inside. If you chain these, the errors will creep in exactly as indicated in Dmytry’s post. That’s why the suggested domains are such that either it won’t be iterated, or errors are expected by the software, or the errors from this source are much smaller than the other errors in the system.
Yep. Well, one could use shorter chains, clamp the signal to 0 or 1, then feed it into another short chain, eliminating noise to some extent. The brain does something like that, basically. If you have sigmoid functions along the chain, you sort of digitize the signal and get rid of the noise.
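The restore-between-short-chains idea can be sketched like this (the noise level, chain length, and clamping interval are arbitrary illustrative choices):

```python
import random

def noisy_pass(x, noise=0.03):
    # One "analog" stage: pass the value through with additive noise.
    return x + random.gauss(0.0, noise)

def run_chain(bit, n_stages, clamp_every=None):
    # Carry a nominally-binary value through n_stages noisy stages,
    # optionally restoring it to a clean logic level every few stages.
    v = float(bit)
    for i in range(1, n_stages + 1):
        v = noisy_pass(v)
        if clamp_every and i % clamp_every == 0:
            v = 0.0 if v < 0.5 else 1.0  # clamp back to 0 or 1
    return v

random.seed(1)
raw = run_chain(1, 400)                      # noise accumulates freely
restored = run_chain(1, 400, clamp_every=5)  # noise reset every 5 stages
print(abs(raw - 1.0), abs(restored - 1.0))
```

As long as each short segment's accumulated noise stays well below the 0.5 decision threshold, clamping resets the error to zero instead of letting it compound, which is essentially what digital level restoration does.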
That would make things worse, not better. Clamping destroys the noise by adding more of it, in an amount that ‘happens’ to move the value to an integer...
You’d lose some performance that way though, unless you wanted to do away with function composition. (That would probably be a bad idea.) It’s not entirely clear (to me) that the cost of making this work in a real-life environment wouldn’t nullify the performance gains.
Especially given that we already do a fair bit of approximation (eg floating-point calculations).
The OP seemed to indicate that the errors come from the logarithmic approximation and from using orders of magnitude fewer transistors, forfeiting exactness. What is analogue about this? Or does analogue mean something different than I thought, and simply refer to there being error bars on each calculation?
The only way to calculate a logarithm or an exponent (or indeed anything) with one transistor is by making an analogue circuit.
Here’s the patent, since I couldn’t find any other detailed documentation. It describes two separate implementations:
Digital, storing log(x) as a fixed-point number and performing ordinary digital arithmetic on it.
Analogue, storing x as floating-point with digital sign and exponent but analogue mantissa. It then describes some mixed analogue/digital circuits to perform the requisite arithmetic.
The slides linked in the OP are about the digital one, and only once mention the possibility of analogue as an intuition pump. I don’t know which one the quoted performance numbers are for.
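The core trick of the digital variant — storing log2(x) as a fixed-point number so that multiplication becomes addition — can be sketched as follows (the fraction width and the way the addition correction term is computed are illustrative assumptions, not details from the patent):

```python
import math

# Sketch of a logarithmic number system (LNS):
# x is represented by log2(x) stored as a fixed-point integer.
FRAC_BITS = 8          # illustrative choice of fixed-point precision
SCALE = 1 << FRAC_BITS

def encode(x):
    return round(math.log2(x) * SCALE)

def decode(e):
    return 2.0 ** (e / SCALE)

def lns_mul(a, b):
    # Multiplication is just addition of the logs, exact up to rounding.
    return a + b

def lns_add(a, b):
    # Addition is the hard part: log2(2^a + 2^b) = max + log2(1 + 2^-(max-min)).
    # Real hardware approximates this correction term; here we compute it directly.
    hi, lo = max(a, b), min(a, b)
    d = (hi - lo) / SCALE
    return hi + round(math.log2(1.0 + 2.0 ** -d) * SCALE)

x, y = encode(3.0), encode(5.0)
print(decode(lns_mul(x, y)))   # close to 15, with small representation error
print(decode(lns_add(x, y)))   # close to 8
```

This makes the trade-off concrete: multiplication gets cheap (an adder instead of a multiplier), while addition picks up approximation error from the correction term — which fits the "errors are expected by the software" framing above.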