The first response is what I’m calling the epsilon fallacy. (If you know of an existing and/or better name for this, let me know!)
This reminds me of Amdahl’s Law. You could call it Amdahl’s fallacy, but I’m not sure if it is a better name.
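For concreteness, Amdahl's Law bounds overall speedup by the fraction of the task you actually improve. A minimal sketch (the function name is mine, not from either post):

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of a task is sped up by a factor s.

    The untouched fraction (1 - p) bounds the total gain no matter
    how large s gets.
    """
    return 1.0 / ((1.0 - p) + p / s)
```

Even an effectively infinite speedup on 90% of the work caps the overall gain at 10x; the neglected 10% dominates, which is the same shape as the epsilon fallacy.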
One common failure mode I’ve noticed in myself involves taking breaks. After some productive work, I intend to take a 5- or 10-minute break, but I often fail to return within that time. In fact, I sometimes take several days to get back to the task at hand.
It’s like Zeno’s paradox kicks in every time you try to start afresh after a break.
I’ve previously tried to avoid taking breaks in the first place and instead work in three-hour sessions, but I wasn’t consistent enough to do this every day.
I’ve had trouble making up my mind about Jordan Peterson, and this post was enormously helpful in clarifying my thinking about him. Also:
A new expansion just came out for the Civilization 6 video game, and instead of playing it I’m nine hours into writing this post and barely halfway done. I hope I’m not the only one getting some meaning out of this thing.
This resulted in me updating heavily on the amount of effort involved in writing great content.
I don’t know if this is what you read, but this reminds me of Bell Labs:
ONE element of his approach was architectural. He personally helped design a building in Murray Hill, N.J., opened in 1941, where everyone would interact with one another. Some of the hallways in the building were designed to be so long that to look down their length was to see the end disappear at a vanishing point. Traveling the hall’s length without encountering a number of acquaintances, problems, diversions and ideas was almost impossible. A physicist on his way to lunch in the cafeteria was like a magnet rolling past iron filings.
― New York Times
I had worded it somewhat poorly; I wasn’t intending to say that Steve Jobs should have attempted a lower-level analysis in technology design.
I just found it unconvincing in the sense that I couldn’t think of an example where applying lower level intuitions was a strategic mistake for me in particular. As you mention in your other comment, I am not substantially more certain that my high-level intuition is well-honed in any particular discipline.
More generally, Steve Jobs consistently applied high-level intuition to big life decisions too, as evidenced by his commencement speech. On the whole it worked out for him, I guess, but he also tried to cure his cancer with alternative medicine, which he later regretted.
I completely agree with your computational tradeoff comment though.
I reflexively tried to reverse the advice, and found it surprisingly hard to think of situations where applying higher level intuition would be better.
There’s an excerpt by chess GM Mikhail Tal:
We reached a very complicated position where I was intending to sacrifice a knight. The sacrifice was not obvious; there was a large number of possible variations; but when I began to study hard and work through them, I found to my horror that nothing would come of it. Ideas piled up one after another. I would transport a subtle reply by my opponent, which worked in one case, to another situation where it would naturally prove to be quite useless. As a result my head became filled with a completely chaotic pile of all sorts of moves, and the infamous “tree of variations”, from which the chess trainers recommend that you cut off the small branches, in this case spread with unbelievable rapidity.
And then suddenly, for some reason, I remembered the classic couplet by Korney Ivanovic Chukovsky: “Oh, what a difficult job it was. To drag out of the marsh the hippopotamus”. I don’t know from what associations the hippopotamus got into the chess board, but although the spectators were convinced that I was continuing to study the position, I, despite my humanitarian education, was trying at this time to work out: just how WOULD you drag a hippopotamus out of the marsh? I remember how jacks figured in my thoughts, as well as levers, helicopters, and even a rope ladder. After a lengthy consideration I admitted defeat as an engineer, and thought spitefully to myself: “Well, just let it drown!” And suddenly the hippopotamus disappeared. Went right off the chessboard just as he had come on … of his own accord!
And straightaway the position did not appear to be so complicated. Now I somehow realized that it was not possible to calculate all the variations, and that the knight sacrifice was, by its very nature, purely intuitive. And since it promised an interesting game, I could not refrain from making it.
But this example is somewhat contrived, since it is reminiscent of the pre-rigor, rigor, and post-rigor phases of mathematics (or, more generally, of mastering any skill). And one could argue chess GMs have so thoroughly mastered the lower levels that they can afford to skip them without making catastrophic errors.
Another example that comes to mind is Marc Andreessen in the introduction to Breaking Smart:
In 2007, right before the first iPhone launched, I asked Steve Jobs the obvious question: The design of the iPhone was based on discarding every physical interface element except for a touchscreen. Would users be willing to give up the then-dominant physical keypads for a soft keyboard?
His answer was brusque: “They’ll learn.”
It seems quite clear that Jobs wasn’t applying intuition at the lowest level here. And the end result might well have been worse if he had applied intuition at lower levels. He even explicitly says:
You can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something—your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life.
I find neither of the examples I came up with convincing. But are there circumstances where applying intuition at lower levels is a strategic mistake?
LessWrong also has an existing Slack channel; I don’t know if it is active. I sent a private message to Elo on the old LessWrong to get an invite. It was created in 2015, when the only way to join was an email invite, but now it is possible to get an invite link.
If I get an invite, I’ll try to convince Elo to install the donut.ai plugin and give out an invite link. I was about to create a new Slack channel, but I remembered this relevant xkcd.
Thanks for your input!
You are correct ― scheduling is a problem. Perhaps we can get around that by building something like Omegle, but with only rationalists in it. It shouldn’t be too hard to hack together something with WebRTC: a chat room where you are automatically matched with strangers and can video chat with them.
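The matching part, at least, is simple bookkeeping. A minimal sketch of the pairing logic (the `Matchmaker` class is hypothetical; the actual WebRTC signaling and video transport are omitted):

```python
import queue


class Matchmaker:
    """Pair each arriving user with a waiting stranger, Omegle-style.

    In a real deployment this would sit behind a WebRTC signaling
    server; here it is only the in-memory pairing queue.
    """

    def __init__(self):
        self.waiting = queue.Queue()

    def join(self, user_id):
        """Return a waiting partner's id, or queue up and return None."""
        try:
            partner = self.waiting.get_nowait()
        except queue.Empty:
            # Nobody is waiting: this user becomes the waiter.
            self.waiting.put(user_id)
            return None
        return partner
```

The first caller waits, the second is paired with them, and so on; everything beyond this (session setup, re-matching on disconnect) is where the real work would be.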
Sanity checks are usually pretty easy to do, but if you can’t do them, then this strategy just won’t work.
I concede that Bitcoin is pretty easy to understand and sanity check (Merkle trees aren’t that hard to wrap your head around ― I would have invested in Bitcoin in 2012 when I heard about it, but I was in high school and had no disposable income). But sanity checking Tezos is much harder:
It turns out that a silver bullet for chain validation is right on the horizon and under active research: recursive SNARKs. SNARKs, which stands for succinct non-interactive zero-knowledge proofs of knowledge are the technology used in Zcash for protecting the privacy of transactions (if you’re already objecting that SNARKs require a trusted setup, please bear with us, we have good news for you).
… This very counter intuitive possibility is a consequence of the PCP theorem. Rather than try to engage in economic “bets” that the transaction has been properly validated we can obtain true cryptographic assurance.
― Scaling Tezos
I don’t know enough to sanity check their scaling strategy or what makes Tezos unique in this respect. Even Ethereum itself is experimenting with zk-SNARKs, and Vitalik Buterin wrote a series of articles explaining SNARKs on his Medium blog, which starts off with an article that says:
You’re not expected to understand everything here the first time you read it, or even the tenth time; this stuff is genuinely hard. But hopefully this article will give you at least a bit of an idea as to what is going on under the hood.
― Exploring Elliptic Curve Pairings
So far Tezos’ Unique Selling Point seems to be that they’ve used OCaml to implement some way of doing formal verification on smart contracts. But this alone seems like insufficient evidence to conclude that Tezos is shiny. To make matters worse, there seems to be some internal conflict between the co-founders of Tezos.
I don’t know how I could possibly sanity check Tezos without understanding SNARKs and reading their whitepaper (both of which require non-trivial prerequisites).
Perhaps you have the background, or access to people who do, to evaluate Tezos properly. But I certainly do not, and I would argue the cost of acquiring that background is high enough to rule it out.
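To make the contrast concrete, the Bitcoin-side claim really is checkable in an afternoon. A minimal sketch of the Merkle tree construction mentioned above, using Bitcoin’s double-SHA-256 convention (the function name is mine):

```python
import hashlib


def merkle_root(tx_hashes):
    """Fold a list of transaction hashes (bytes) up to a single Merkle root.

    Bitcoin-style: each pair is hashed with double SHA-256, and a level
    with an odd number of hashes duplicates its last entry.
    """
    if not tx_hashes:
        raise ValueError("need at least one transaction hash")
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last hash on odd counts
        level = [
            hashlib.sha256(hashlib.sha256(level[i] + level[i + 1]).digest()).digest()
            for i in range(0, len(level), 2)
        ]
    return level[0]
```

Proving a transaction belongs to a block then takes only a logarithmic number of sibling hashes, which is why the construction is easy to sanity check; the SNARK machinery behind the Tezos claim has no comparably short path to verification.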
By that time I had sufficient interest in crypto to take the time to read and understand what it was about, and how it was different.
I think you’re underestimating the amount of insider knowledge you’ve gained and the cost of attaining that insider knowledge. Eliezer is certainly surrounded by smart people and MIRI received half of its donations in crypto. Yet, they still did not invest in crypto. I think this is because they lacked deep insider knowledge about cryptocurrencies.
I think what usually happens is this: a rationalist hears about shiny thing X, investigates it thoroughly, concludes that X is in fact shiny, then takes action. There are a lot of things worth investigating at any given point ― VR, homotopy type theory, blockchain-based technologies, genetics, deep learning, functional programming, and so on. Unfortunately, you have to invest a lot of time to gain insider knowledge in any of these.
I think people get stuck in the investigation process and don’t proceed further due to the temporal cost of attaining that knowledge.
In aggregate, take their ideas seriously even when they might not take their own ideas seriously.
To a crypto outsider like myself, Tezos still feels like another ICO scam. I googled Tezos for a while and still have no idea why it has a strong “signal”. I believe you’re advocating investing some money in Tezos because smart people think it’s cool, skipping the investigation step.
I’m still confused about whether it is rational to skip the investigation step and blindly invest in Tezos without understanding how it is different.