TL;DR: you probably already know that $2^{10} = 1024$; use this to derive the R10 preferred numbers (the powers of $10^{1/10}$) instead of memorizing them!
[Renard series](https://en.wikipedia.org/wiki/Renard_series), which were designed in the 1870s to be convenient for the officers and engineers of the French Army working without a slide rule or a log table, are based on 5th and 10th roots of 10. 1024 being quite close to 1000 means that $2$ is very close to $10^{3/10}$, and this allows you to quickly derive R10 numbers without pen and paper.
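A quick numeric sketch of how close the coincidence is:

```python
# 2**10 = 1024 is within 2.4% of 10**3 = 1000 ...
print(2**10 / 10**3)   # 1.024
# ... so 2 overshoots 10**(3/10) by only about 0.24%:
print(2 / 10**0.3)
```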
I have used the algorithm for so long that it has become almost unconscious, so I had Gemini write it out:
Mental Algorithm for R10 Numbers (sorry for the poor formatting; it doesn't copy-paste neatly, and I only fixed it manually where it doesn't read well)
In the R10 series, every step increases the value by a factor of $\approx 1.26$.
However, since $2^{10} = 1024 \approx 1000 = 10^3$, three steps ($\times\,10^{3/10}$) multiply the value by almost exactly 2.
This gives you the Golden Rule of R10:
* Add 3 to the Index → Multiply Value by 2
* Subtract 3 from the Index → Divide Value by 2
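A one-line sanity check of the rule (the exact R10 step is $10^{1/10}$, so three steps are $10^{3/10}$):

```python
r10 = lambda i: 10 ** (i / 10)   # exact R10 value at index i

# "Add 3 to the index, multiply the value by 2" holds to within ~0.24%:
for i in range(8):
    assert abs(r10(i + 3) / r10(i) - 2) < 0.005
print(10 ** (3 / 10))   # the exact three-step factor, just under 2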
### The Algorithm: The Three Strands
To find any R10 number mentally, you don’t calculate them sequentially. Instead, you split the numbers 0–10 into three “strands” based on the anchors you already know: **1**, **8**, and **10**.
#### Strand A: The Powers of 2 (Indices 0, 3, 6, 9)
Start at **1** and double it every 3 steps.
* R10(**0**) = **1.0**
* R10(**3**) = **2.0**
* R10(**6**) = **4.0**
* R10(**9**) = **8.0**
#### Strand B: The Halving from 10 (Indices 10, 7, 4, 1)
Start at **10** and halve it every 3 steps (going backwards).
* R10(**10**) = **10.0**
* R10(**7**) = **5.0**
* R10(**4**) = **2.5**
* R10(**1**) = **1.25**
#### Strand C: The “80% Rule” (Indices 8, 5, 2)
This is the hardest strand because it doesn’t land on a clean integer.
We derive this by starting at R10(9), which we know is **8.0**, and going **down 1 step**.
Mathematically, going down 1 step is dividing by $1.2589...$, which is almost exactly multiplying by **0.8**.
* Start at R10(9) = 8.0.
* **R10(8)** $\approx 8.0 \times 0.8 =$ **6.4** (Anchor)
* Now, apply the “Subtract 3 is Half” rule:
* **R10(5)** $\approx 6.4 / 2 =$ **3.2**
* **R10(2)** $\approx 3.2 / 2 =$ **1.6**
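The three strands can be sketched and compared against the exact values $10^{i/10}$ (a quick verification, not part of the mental method):

```python
# Compare the three mental strands with the exact R10 values 10**(i/10).
exact = {i: 10 ** (i / 10) for i in range(11)}

mental = {}
for k, i in enumerate(range(0, 10, 3)):   # Strand A: indices 0,3,6,9 -> 1,2,4,8
    mental[i] = 2 ** k
for k, i in enumerate(range(10, 0, -3)):  # Strand B: indices 10,7,4,1 -> 10,5,2.5,1.25
    mental[i] = 10 / 2 ** k
for k, i in enumerate(range(8, 1, -3)):   # Strand C: indices 8,5,2 -> 6.4,3.2,1.6
    mental[i] = 6.4 / 2 ** k

for i in range(11):
    print(i, mental[i], round(exact[i], 2))
```

The worst case is index 8: 6.4 versus the true 6.31, an error of about 1.4%.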
### Summary Table (Mental vs Actual)
By using this mental model (Doubling, Halving, and the 0.8 factor), your approximations are incredibly close to the standard values.
| Index | Mental Derivation | Approx Value | Actual R10 Value |
| :--- | :--- | :--- | :--- |
| **0** | Base | **1.00** | 1.00 |
| **1** | $10 \div 8$ | **1.25** | 1.25 |
| **2** | $3.2 \div 2$ | **1.60** | 1.60 |
| **3** | $1 \times 2$ | **2.00** | 2.00 |
| **4** | $5 \div 2$ | **2.50** | 2.50 |
| **5** | $6.4 \div 2$ | **3.20** | 3.15 |
| **6** | $2 \times 2$ | **4.00** | 4.00 |
| **7** | $10 \div 2$ | **5.00** | 5.00 |
| **8** | $8 \times 0.8$ | **6.40** | 6.30 |
| **9** | $4 \times 2$ | **8.00** | 8.00 |
| **10** | Base | **10.00** | 10.00 |
### Quick Reference for Your Brain
1. **0, 3, 6, 9:** Just say **1, 2, 4, 8**.
2. **1, 4, 7:** Start at 10 and halve backwards ($10 \to 5 \to 2.5 \to 1.25$).
3. **2, 5, 8:** Remember **6.4** (from $8 \times 0.8$), then halve backwards ($6.4 \to 3.2 \to 1.6$).
For the 5th R10 number you can also use the coincidence that the square root of 10 is close to $\pi$ (this was used in antiquity to approximate $\pi$); for the 8th number you can use $2\pi \approx 6.28$ (I personally just memorized it in middle school). But neither is really necessary for mental calculations.
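Both coincidences are easy to verify (assuming the memorized constant for the 8th number is $2\pi$):

```python
import math

# R10(5) = 10**0.5 ≈ 3.1623 vs pi ≈ 3.1416: agreement to ~0.7%
print(10 ** 0.5, math.pi)
# R10(8) = 10**0.8 ≈ 6.3096 vs 2*pi ≈ 6.2832: agreement to ~0.4%
print(10 ** 0.8, 2 * math.pi)
```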
Thanks, I have seen that and thought about it. The best explanation I have come up with is that Meta is not really a competitor to Google anymore; they lag way behind. (There is also a hypothesis of "offloading" TPU depreciation from GCP onto clients, but that seems less important than GCP profits, plus the possibility of Meta promising to integrate TPUs with PyTorch.)
SemiAnalysis reported that a similar deal might have been considered with OpenAI, and that Anthropic is actually buying the hardware (in parallel to renting it), but this looks dubious to me: why would Google do that, eating into the profit of their cloud services and competing with them? SemiAnalysis offers no explanation.