On Doing the Improbable
(Cross-posted from Facebook.)
I’ve noticed that, by my standards and on an Eliezeromorphic metric, most people seem to require catastrophically high levels of faith in what they’re doing in order to stick to it. By this I mean that they would not have stuck to writing the Sequences or HPMOR or working on AGI alignment past the first few months of real difficulty, without assigning odds in the vicinity of 10x what I started out assigning that the project would work. And this is not a kind of estimate you can get via good epistemology.
I mean, you can legit estimate 100x higher odds of success than the Modest and the Outside Viewers think you can possibly assign to “writing the most popular HP fanfiction on the planet out of a million contenders on your first try at published long-form fiction or Harry Potter, using a theme of Harry being a rationalist despite there being no evidence of demand for this” blah blah et Modest cetera. Because in fact Modesty flat-out doesn’t work as metacognition. You might as well be reading sheep entrails in whatever way supports your sense of social licensing to accomplish things.
But good epistemology can’t get you numbers in the range of the required threshold, which I estimate at something like 70%, before people will carry on through bad times. “It might not work” is enough that pushing on past that 30% failure probability takes a great effort. It’s not good decision theory, but it seems to be how people actually work on group projects where they are not personally madly driven to accomplish the thing.
I don’t want to have to artificially cheerlead people every time I want to cooperate in a serious, real, extended shot at accomplishing something. Has anyone ever solved this organizational problem by other means than (a) bad epistemology (b) amazing primate charisma?
EDIT: Guy Srinivasan reminds us that paying people a lot of money to work on an interesting problem is also standardly known to help in producing real perseverance.
I’m a bit surprised that most of the previous discussion here was focused on the “okay, so how do you actually motivate people?” aspect of this post.
This post gave me a fairly strong “sit bolt upright in alarm” experience because of its implications for epistemics, and I think those implications are sneaky and far-reaching. I expect this phenomenon to influence people’s ability to think and communicate, before you get to the point where you actually have a project whose hard parts people are hitting.
People form models of what sort of things they can achieve, and what sort of projects to start, and how likely their friends are to succeed at things, and (I expect) this backpropagates through their entire information system.
The problem may not just affect the people on a given project – it could affect the people nearby (and the people nearby them in turn), in terms of what sort of feedback you can easily give each other.
I’ve struggled with deciding whether to tell people I don’t think their project is likely to work, when I nonetheless think the project is the right thing for them to be working on – either for EV reasons or for long-term-growth-as-a-person reasons (i.e. their next project will benefit from the skills they’re gaining here). I know there are some people who’d lean hard into “information is a key bottleneck; definitely don’t withhold info like that.” And they may be right. But if so, that still leaves a deep, crucial question: how do you integrate honest epistemics with sustained effort on hard, long problems, so that you actually win at the instrumental part?
(I don’t think “pay people a lot” is actually sufficient, although it obviously helps. I think lack-of-clear-path-to-victory can be demoralizing even if you have plenty of money – that seems to be where a lot of 20th-century ennui comes from. Also, many of the key projects here are early-stage, where it’s just hard to convince people to give you enough money to escape scarcity mindset.)
I notice I don’t have a very explicit model of what’s going on – this comment felt more like ranting than like a clear explanation, and I’m not sure how it comes across to others.
I hope to flesh this out (and perhaps make some empirical predictions) during the review phase.
This post has influenced how I evaluate what I’m doing in practice, by forcing me to consider lowering the bar for expected success on high-return activities. Despite “knowing” how to shut up and multiply, and knowing that I should expect a high failure rate if I’m taking reasonable levels of risk, I didn’t consciously place enough weight on these considerations. This post helped move me in that direction, which has led to both an increased number of failures to get what I hoped for, and a number of mostly unexpected successes when applying for / requesting / attempting things.
It is worth noting that I still need to work on my reaction to failing at these low-cost, high-risk activities. I sometimes have a significant emotional reaction to failing, which is especially problematic because failing at a long shot can influence my mood for days or weeks afterwards.
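To make the shut-up-and-multiply arithmetic behind these low-cost, high-risk asks concrete, here is a minimal sketch in Python – all of the numbers are invented for illustration, not drawn from the post:

```python
# Toy expected-value comparison for a low-cost, high-risk ask.
# All numbers are made up for illustration.
p_success = 0.10   # assumed: the long-shot ask usually fails
payoff = 100.0     # assumed: value if it succeeds
cost = 2.0         # assumed: time/awkwardness of making the ask

ev_ask = p_success * payoff - cost  # 0.10 * 100 - 2 = 8.0
ev_no_ask = 0.0                     # not asking costs nothing, gains nothing

print(f"EV of asking: {ev_ask}, EV of not asking: {ev_no_ask}")
# Asking wins on EV despite a 90% failure rate -- hence the many felt
# failures alongside the occasional unexpected success.
```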
The core idea of this post has occurred to me more than once when considering plans. I’m still working out how to relate to plans with low chances of success. On the one hand, a low chance of success suggests a bad plan, and being willing to “do the improbable” can feel like an excuse for having a bad plan. On the other hand, sometimes you really do want to be pursuing low-probability, high-EV plans. I’m uncertain whether LW counts as this. Sometimes I think we can definitely succeed at big stuff; sometimes it seems more like high-EV, low-probability. I’m not sure.
But all in all, the idea here seems important to think more about. I’d actually like to see more thought on this (and perhaps do my own stuff here).
Also want to note that Anna’s post on “going at half-speed” seems at least a bit related.