Actually, reading the whole article, it is awfully vague, to the point that it is hard to tell whether it describes something entirely digital using approximate circuitry (which amounts to having fewer bits), something partly analog, or what. It is basically devoid of useful information; there are just claims.
The phrase “even though a single transistor can perform, for instance, an approximate exponential or logarithm” strongly suggests some analog circuitry.
On the other hand, later in the article he speaks of a fully digital implementation that would store all numbers as logarithms. But then most of the operations (multiply, divide, square root) are handled by the integer unit of an ordinary CPU, and he only needs a special function to support addition and subtraction. So why doesn’t he just note that the only thing he ever needs is to add that one function to an existing integer CPU? I have personally implemented code that stored fixed-point logarithms and operated on the logarithms of values, because that was a good optimization. I don’t see what exactly is innovative here at all. I’m sure plenty of people in plenty of circumstances have used an integer CPU operating on fixed-point logarithms to do floating-point computations.
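To make the point concrete, here is a minimal sketch of that kind of fixed-point-logarithm arithmetic (my own illustration, not the article’s design; the word size and scale factor are arbitrary choices). Multiply, divide, and square root become plain integer add, subtract, and shift; only addition of two values needs a special function, which in hardware would be a small lookup table:

```python
import math

# Assumed representation: a positive real x is stored as the integer
# round(log2(x) * SCALE), i.e. a fixed-point base-2 logarithm.
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS

def encode(x):
    """Positive real -> fixed-point log2(x)."""
    return round(math.log2(x) * SCALE)

def decode(l):
    """Fixed-point log2 -> approximate real value."""
    return 2.0 ** (l / SCALE)

# These three are ordinary integer-unit operations on the stored logs:
def log_mul(a, b): return a + b        # x * y
def log_div(a, b): return a - b        # x / y
def log_sqrt(a):   return a >> 1       # sqrt(x)

# Addition is the one operation needing a special function
# F(d) = log2(1 + 2^-d); a hardware version would tabulate it.
def log_add(a, b):
    hi, lo = max(a, b), min(a, b)
    d = (hi - lo) / SCALE
    return hi + round(math.log2(1.0 + 2.0 ** (-d)) * SCALE)
```

With 16 fractional bits the results are accurate to a few parts per million, e.g. `decode(log_mul(encode(3.0), encode(5.0)))` comes out very close to 15.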
I guess I was thrown off by assuming that the approach was indeed novel.
Furthermore, there are FPGAs he could program to produce a working example within two months, if he’s serious.
By the way, with regard to IEEE floating point: there has been some terrible lobbying for the standard, and the standard is bad for both software and hardware (read up on denormals).