This is fascinating and I’d love to hear more depth on whatever you’d be willing to share.
Regarding the suggestion to start with something small, I think in hindsight it was kind of a manipulation on my part to make the tool seem safer and to try to get more people to try it. In my limited experience, internal conflicts that seem small rarely turn out to be.
When I first tried IDC at CFAR, the initial “small starting point” of “Should I floss?” dredged up a whole complex about distrust of doctors in particular and authority in general. A typical experience with watching myself and others IDC is that regardless of the starting point, one ends up in a grand dramatic battle of angels and demons over one’s soul.
Thanks for reminding me about this talk! I read it one more time just now and was struck by passages that I completely missed the first couple times:
Ed David was concerned about the general loss of nerve in our society. It does seem to me that we’ve gone through various periods. Coming out of the war, coming out of Los Alamos where we built the bomb, coming out of building the radars and so on, there came into the mathematics department, and the research area, a group of people with a lot of guts. They’ve just seen things done; they’ve just won a war which was fantastic. We had reasons for having courage and therefore we did a great deal. I can’t arrange that situation to do it again. I cannot blame the present generation for not having it, but I agree with what you say; I just cannot attach blame to it. It doesn’t seem to me they have the desire for greatness; they lack the courage to do it.
It seems an optimistic note, that some of what one lacks in ability or work ethic, one can make up for with courage, which one can train. And also:
For myself I find it desirable to talk to other people; but a session of brainstorming is seldom worthwhile. I do go in to strictly talk to somebody and say, “Look, I think there has to be something here. Here’s what I think I see …” and then begin talking back and forth. But you want to pick capable people. To use another analogy, you know the idea called the ‘critical mass.’ If you have enough stuff you have critical mass. There is also the idea I used to call ‘sound absorbers’. When you get too many sound absorbers, you give out an idea and they merely say, “Yes, yes, yes.” What you want to do is get that critical mass in action; “Yes, that reminds me of so and so,” or, “Have you thought about that or this?” When you talk to other people, you want to get rid of those sound absorbers who are nice people but merely say, “Oh yes,” and to find those who will stimulate you right back.
In other words, to be a good collaborator you have to contribute to the babble.
Is the following interpretation equivalent to the point? It can be systematically incorrect to “update on evidence.” What my brain experiences as “evidence” is actually “an approximation of the posterior.” Thus, the actual dog is [1% scary], but my prior says dogs are [99% scary]; I experience the dog as [98% scary], which my brain rounds back to [99% scary]. And so I get more evidence that I am right.
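This rounding story can be sketched numerically. A minimal toy model, where the contamination weight and the coarse “category” scheme are my own assumptions, not anything from the post:

```python
# Toy sketch of "updating on an approximation of the posterior."
# The contamination weight and the category model are my assumptions.
def perceived(actual, prior, contamination=0.97):
    # What the brain experiences as "evidence": mostly the prior leaking through.
    return contamination * prior + (1 - contamination) * actual

def round_to_category(p, categories=(0.01, 0.5, 0.99)):
    # The brain stores only coarse scariness levels and snaps experience to the nearest one.
    return min(categories, key=lambda c: abs(c - p))

belief = 0.99   # prior: "dogs are 99% scary"
actual = 0.01   # this particular dog is 1% scary
for _ in range(50):
    evidence = perceived(actual, belief)   # ~0.96: the prior dressed up as data
    belief = round_to_category(evidence)   # snaps back to 0.99

print(belief)  # 0.99 -- fifty harmless dogs later, the belief has not moved
```

Under these toy numbers the perceived “evidence” always lands closer to 0.99 than to any other category, so the loop is a fixed point and no amount of harmless dogs moves the belief.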
I’m not totally convinced this is the right way to think about it: any given useful mutation will depend on some constant number of coordinates flipping, so in this high-dimensional space you’re talking about, useful mutations would look like affine subspaces of low codimension. When you project down to the relevant few dimensions, there are probably more copies of the virus than points to fit in, and it takes a long time for them to spread out.
I guess it depends on the geometry of the problem, whether there are a small number of relevant mutations that make a difference, each with a reasonable chance of being reached, or a huge number of relevant mutations each of which is hard to reach.
Adding onto this a little, here’s a toy model of viral genetic diversity based on my high-school level biology.
Suppose the virus’s DNA starts out as 000 (instead of ACTG, for simplicity), and it needs to mutate into 111 to become stronger. Each individual reproduction event has some small probability p of flipping one of these bits. Some bit flips cause the virus to fail to function altogether, while others have no or negligible effect on the virus. As time goes on, the number of reproduction events starting from a given bitstring grows exponentially, so the likelihood of getting one more 1 grows exponentially as well. However, when one copy jumps from 000 to 100, it’s not as if all other copies of 000 turn into 100, so making the next jump takes a while: you have to wait for lots of copies of 100 to accumulate. Then some 101 appears, and there’s no jump for a while again as that strain populates.
The upshot is that you can imagine the viral population “filling out the Hamming cube” one bitflip at a time, where the weight on each bitstring is the total number of viruses with that code; a genuinely new strain only appears when all 3 bits get flipped in some copy. But:
(a) The more total copies of the virus there are, the faster a bad mutation happens (speed scaling linearly in the number of copies).
(b) Assuming that some mutations require multiple independent errors to occur (which seems likely?), the virus population is “making incremental research progress” over time by spreading out across the genetic landscape towards different strains, even when no visibly different strains occur.
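The toy model above can be simulated directly. A minimal sketch, where the flip probability, population sizes, and generation cap are toy numbers of mine chosen so the run finishes quickly:

```python
import random

# Direct simulation of the 3-bit toy model: a fixed population of genomes,
# each flipping one random bit with small probability per generation.
def first_escape_generation(pop_size, n_bits=3, p_flip=0.05, seed=0, max_gens=2000):
    """Generations until some copy reaches the all-ones string 111."""
    rng = random.Random(seed)
    pop = [0] * pop_size            # genomes as integers; 0 is the bitstring 000
    target = (1 << n_bits) - 1      # 7 is the bitstring 111
    for gen in range(1, max_gens + 1):
        for i in range(pop_size):
            if rng.random() < p_flip:
                pop[i] ^= 1 << rng.randrange(n_bits)  # flip one random bit
        if target in pop:
            return gen
    return max_gens

print(first_escape_generation(pop_size=50), first_escape_generation(pop_size=500))
```

In runs of this sketch the larger population tends to reach 111 in fewer generations, matching (a); and well before any copy reaches 111, the population has spread out over the weight-1 and weight-2 bitstrings, matching the “incremental research progress” of (b).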
re: why are there more scary new strains now:
Have people already accounted for the fact that the more virus there is in the world, the more likely it is for one of these viruses to mutate? If there are 5x as many cases of covid floating around right now as in September, a strain as bad as the UK strain will emerge 5x as quickly in expectation.
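The “5x the cases, 5x as fast” claim can be checked with a quick simulation. A sketch under my own toy assumptions: each active case independently produces the key mutation with some small probability q per day, so the days until first appearance are geometric:

```python
import random

# Rough check of "k times the cases -> new strain ~k times sooner in expectation."
# Assumes each of n active cases independently yields the mutation with
# probability q on any given day (q, case counts, trial count are toy numbers).
def mean_days_to_strain(n_cases, q=1e-4, trials=2000, seed=0):
    rng = random.Random(seed)
    p_any = 1 - (1 - q) ** n_cases  # chance the mutation appears somewhere today
    total = 0
    for _ in range(trials):
        days = 1
        while rng.random() > p_any:  # geometric waiting time
            days += 1
        total += days
    return total / trials

slow = mean_days_to_strain(100)   # per-day hit probability ~ 0.01
fast = mean_days_to_strain(500)   # 5x the cases: per-day hit probability ~ 0.05
print(slow, fast)
```

As long as n*q is small, p_any ≈ n*q, so the expected waiting time 1/p_any scales inversely with the number of cases, and the simulated ratio comes out close to 5.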
This feels like an extremely important point. A huge number of arguments devolve into exactly this dynamic because each side only feels one of (the Rock|the Hard Place) as a viscerally real threat, while agreeing that the other is intellectually possible.
Figuring out that many, if not most, life decisions are “damned if you do, damned if you don’t” was an extremely important tool for me to let go of big, arbitrary psychological attachments which I initially developed out of fear of one nasty outcome.
I agree, but I was more asking about how you think your insight about the “distance to safety” can help with that.
Well, after a bounded number of initially difficult “far-out explorations” that cover the research landscape efficiently, the hope is that almost everything is reasonably close to safety henceforth.
Interesting. My own approach is usually to collaborate/ask someone who knows the subject you want to learn. But that does require being okay with asking stupid questions.
Yes, I think your approach is ideal for the efficiency of learning if anxiety was not a factor. Unfortunately the people who know the subjects I want to learn best are people I care about impressing and/or people so well-versed in the subject that they have difficulty bridging the inferential abyss between us. At least for me it is hard to treat them as a “psychologically nearby” companion who has my back.
Even after getting much better at asking stupid questions, it feels like the maximum number I’m okay with asking in a meeting with someone who already knows a subject is ~3, not the ~40 I want to ask.
Very nice post! I would add that it is a useful and nontrivial skill to notice what you’re paying attention to. It may not be helpful to try getting curious unless you know concretely what this means about how you move your eyes and attention.
To give a video game example, players new to a genre have no idea where to put their eyes on the screen. When I told a friend playing Hades to put their eyes on their own character instead of on the enemies, they instantly started taking half as much damage. I got a lot better at Dark Souls, on the other hand, by staring at the enemies to catch their telegraphed movements and not at myself. Similarly, I had a friend who could not get into Path of Exile because they wanted to dive into playing the game mechanically and were frustrated by my claim that to properly enjoy the game you spend most of your energy staring at skill trees and item builds and wikis. I found that my natural state playing PoE was leaving my eyes half unfocused on the game, spamming my skill rotation while thinking about my next item or skill upgrade on my second monitor.
To listen properly and be curious, I think the main places one should focus one’s attention (in addition to the words the other person is saying) are: (a) on the other person’s face and body, (b) on their tone of voice, and (c) on your own bodily sensations. In other words, everywhere but your own thoughts.
To be clear, the papers would almost certainly have gone through anyway, the helpful thing was being very comfortable with Bayes rule and immediately noticing, for example, that conditioning on an event with probability 1-o(1) doesn’t influence anything by very much.
Another trick I derived from this comfort is to almost never actually condition on small-probability events. Instead, the better thing to do is to modify the random variables you care about to fail catastrophically in the small probability scenario.
For example, in graph theory I might care about controlling a random variable X which is the number of times a certain substructure appears in the random graph G(n,p), but to do so I need to condition away some tail event E like the appearance of a vertex of extremely high degree. Instead of working with conditional probability for the rest of the argument (which might go on to condition away 3 or 4 other tail events), the nicer thing to do is to modify X into a variable X’ which is defined to be 0 when E occurs, and reason about X’ instead. This is better for multiple reasons; the most important one being that the edge appearances in G(n,p) are no longer independent when you condition on E complement.
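In symbols (my notation, not from the original papers), the modification and its cost look like:

```latex
% Instead of conditioning on E^c, zero out X on the bad event:
X' := X \cdot \mathbf{1}_{E^c},
\qquad
\mathbb{E}[X] - \mathbb{E}[X'] \;=\; \mathbb{E}\!\left[X\,\mathbf{1}_{E}\right]
\;\le\; \|X\|_\infty \, \Pr[E].
```

The cost is negligible whenever Pr[E] = o(1) and X is polynomially bounded (substructure counts in G(n,p) are at most n^k for fixed k), while X' remains a random variable on the original product space, so edge independence is untouched.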
I think mostly what I got out of the Sequences was removing an air of mystery around Bayes rule. Here by mystery I mean “System 1 mystery,” i.e. that before I read the Sequences, to figure out a conditional probability I would have to sit down and carefully multiply and divide. This post also helped.
How do you think this applies to intellectual pursuits? I have in mind research advising: in my experience, some people who I think could be great researchers are terrified of exploring parts of knowledge where there is no answer yet. And even established researchers can easily be afraid of learning a new subject or a new technique that would help them tremendously. Maybe the comfort flags should be links to stuff that the graduate student/researcher knows well? Anecdotally, people seem more open to learning what you want to say if you link it to their own field.
I don’t pretend to be an established researcher, but here is what I had in mind. Most researchers at one point or other spend some amount of time white-knuckle learning things that are outside their comfort zones, but usually these things are just barely outside. My suggestion would be that all other things equal, some of that time should be spent learning things really far out instead.
Also I think learning in pairs is a very helpful tool. The active ingredient is to have someone you trust enough to freely share your ignorance and ask basic questions, and the easiest way to get this trust is to find someone who is also obviously ignorant of the same thing.
I wonder if the following are also examples of motive ambiguity:
Mothers choosing to stay at home.
Researchers choosing to be bad at teaching.
Mathematicians choosing to work on problems with no applications.
Let me share some more gears/evidence. I believe something a little more interesting happens than what you’re saying (which is definitely one piece of the puzzle).
(1) It’s fun to look at how the audience organizes itself during math talks. The faculty almost always sit in the front row, point out mistakes more directly (“You mean this” instead of “Is this correct?”), ask questions more often (and with less hand-raising), and sometimes even feel comfortable to answer questions in the speaker’s stead. I suspect this is a social role that everyone learns through attending enough seminars.
(2) Faculty have access to a lot more privileged information about other mathematicians than everyone else. They are on editorial boards, hiring committees, admissions committees, conference organization, awards panels, etc. I got a confidence boost after peer reviewing my first couple of papers; the transition to faculty is this x10 in terms of data to train on and notice you’re being underconfident.
(3) Professors spend a lot of time with their research groups/PhD students/undergrads compared to in the company of other faculty, so they aren’t doing as much comparing themselves with other faculty as you would think. At least in mathematics, it’s generally preferred for faculty at the same university to have research interests as far apart as possible (to cover a breadth of fields), so each professor interacts a great deal on the day-to-day with their group of undergrads/grad students/postdocs. Meetings with other faculty are mostly logistical, with the possible exception of a handful of close collaborators. This is probably even more true in fields where a professor is literally the head of their own lab and the PI for all research that happens in the lab. I think status feelings tend to work on the level of “people you interact with most on a daily basis” instead of “people you intellectually compare yourself to.”
Fascinating! Definitely plan to check this out, thanks for the recommendations and detailed introduction.
Thank you for writing this, it led me to reconsider this phenomenon from a different perspective and revisit Lsusr’s post as well as competent elites, which seemed to really string things together for me.
Lsusr is primarily talking about success “outside of the usual system”, which generally frees someone up even more from the usual system. Start-ups are the primary example of this.
Alkjash is primarily talking about success within the existing system. The stereotypical successful career is an example of this.
This definitely feels like part of the thing, but I would (as with many things) phrase it in the language of status. I claim that much of the “freedom” that Lsusr talks about and the “intelligence and aliveness” Eliezer talks about are consequences of feeling high status. In academia, the standard solution to all of the ennui and anxious underconfidence a grad student or postdoc feels is … wait for it … tenure. Your inhibitions magically disappear when you become faculty, and mathematicians often become confident to explore, gregarious, and willing to state beliefs even in dimensions orthogonal to their expertise (e.g. Terry Tao on Trump). This is explained by direct changes in the brain, as well as external changes in how the intelligent social web coopts your cognition, when a person gains status.
My guess is that the difference between what you call Lsusr’s “outside of the usual system” and my “within the existing system” is the difference between systems with shorter and looser status hierarchies and those with longer and tighter ones. In the former it is easier for an exceptional individual to quickly gain competence and reputation and reap the benefits of status. This difference is in turn mostly explained by systems having different levels of play. Thus, one would find success more agency-limiting for a longer period of time in professional Go than in professional Starcraft, in mathematics than in AI/ML, in Google than in a startup, etc.
My interpretation of Lsusr’s philosophy is that there is a magic sauce that rhymes with arrogance which allows one to turn on powerful high-status feelings and behaviors (confidence, agency, vision) regardless of circumstance. Unfortunately there are harsh cultural defenses against this kind of thing that one has to prepare for.
Very interesting! This thread is the first time I’ve heard of NLP (might have seen the acronym before but I thought it was ML people referring to Natural Language Processing), I will definitely check it out. I guess I just rounded off my observations to the nearest things I recognized. I’m not surprised that Robbins stuff is embedded in a larger technique but am kind of surprised that I’ve been ignorant of it for so long.
Is there a book or resource that you would most recommend to learn NLP?
The phenomenon I’m pointing to via “making couples fall in love with him” (which might be the wrong words) is that in relationship interventions he uses a combination of explicit models, personal charisma, and hard-to-transfer people-reading skills to make each person feel understood at a level that causes them to trust him deeply. This level of trust seems pretty extraordinary and hard to distinguish from love. After that he proceeds to use exercises to transfer this trust onto the relationship between the couple in a way that requires very little agency on the couple’s part. They sort of just go along with/paraphrase TR’s statements to each other and then get this massive intimacy boost. I would guess that they come out of the experience feeling as positive or more so about TR than about each other.
I would love to hear which pieces of his written work you think of as “actually new or useful insights,” the only thing that fits that description for me from his youtube videos is the Six Human Needs, which is a useful template for goal-factoring for me.
Like Asimov is back among the living.
No worries! Perhaps it’s worth reminding everyone here that asymmetric justice incentivizes inaction. I hope I didn’t do this just now, I very much appreciate the spirit of your experiment and encourage more people to try to state their beliefs and move fast and break things.