You Get About Five Words
Cross-posted from the EA Forum.
Epistemic Status: all numbers are made up and/or sketchily sourced. Post errs on the side of simplistic poetry – take seriously but not literally.
If you want to coordinate with one person on something nuanced, you can spend as much time as you want talking to them – answering questions in realtime, addressing confusions as you notice them. You can trust them to go off and attempt complex tasks without much oversight, and you can decide to change your collective plans quickly and nimbly.
You probably speak at around 100 words per minute. That’s 6,000 words per hour. If you talk for 3 hours a day, every workday for a year, you can communicate 4.3 million words worth of nuance.
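The arithmetic above can be sketched in a few lines. The workdays-per-year figure is my assumption (roughly 48 weeks of 5 days), chosen to reproduce the post's 4.3 million estimate:

```python
words_per_minute = 100
words_per_hour = words_per_minute * 60        # 6,000 words per hour
hours_per_day = 3
workdays_per_year = 240                       # assumption: ~48 weeks x 5 days

words_per_year = words_per_hour * hours_per_day * workdays_per_year
print(words_per_year)  # 4320000 -- about 4.3 million words of nuance
```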
You can have a real conversation with up to 4 people.
(Last year, the small organization I work at considered hiring a 5th person. It turned out to be very costly and we decided to wait, and I think the reasons were related to this phenomenon.)
If you want to coordinate on something nuanced with, say, 10 people, you realistically can ask them to read a couple books worth of words. A book is maybe 50,000 words, so you have maybe 200,000 words worth of nuance.
Alternately, you can monologue at people, scaling a conversation past the point where people realistically can ask questions. Either way, you need to hope that your books or your monologues happen to address the particular confusions your 10 teammates have.
If you want to coordinate with 100 people, you can ask them to read a few books, but chances are they won’t. They might all read a few books worth of stuff, but they won’t all have read the same books. The information that they can be coordinated around is more like “several blogposts.” If you’re trying to coordinate nerds, maybe those blogposts add up to one book because nerds like to read.
If you want to coordinate 1,000 people… you realistically get one blogpost, or maybe one blogpost worth of jargon that’s hopefully self-explanatory enough to be useful.
If you want to coordinate thousands of people…
You have about five words.
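The scaling sketched above can be summarized as a rough lookup table. All the numbers come straight from the post's own (avowedly made-up) estimates; the exact breakpoints are my assumption about where each tier ends:

```python
def word_budget(n_people: int) -> str:
    """Rough nuance budget per audience size, per the post's estimates."""
    if n_people <= 4:
        return "~4.3 million words/year (real conversation)"
    if n_people <= 10:
        return "~200,000 words (a couple of books)"
    if n_people <= 100:
        return "several blogposts (maybe one book, for nerds)"
    if n_people <= 1_000:
        return "about one blogpost"
    return "about five words"

print(word_budget(10_000))  # about five words
```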
This has ramifications for how complicated a coordinated effort you can attempt.
What if you need all that nuance and to coordinate thousands of people? What would it look like if the world were filled with complicated problems that required lots of people to solve?
I guess it’d look like this one.
I use this concept often, including explicitly thinking about which (roughly) five words I want to be the takeaway, or to deliver the payload, or that I expect people will take away from something. I also think I’ve linked to it quite a few times.
I’ve also used it to remind people that what they are doing won’t work because they’re trying to communicate too much content through a medium that does not allow it.
A central problem is how to create building blocks that have a lot more than five words, but where the five words in each block can do a reasonable substitute job when needed.
As an additional data point, a link to this post will appear in the 12/10 Covid weekly roundup.
This is pretty cool. Can you give some examples of about-five-word takeaways you’ve created for different contexts?
Here are some attempted takeaways for things I’ve written, some of which were explicit at the time, some of which were implicit:
Covid-19: “Outside, social distance, wear mask.”
Simulacra (for different posts/models): “Truth, lies, signals, strategic moves” or “level manipulates/dominates level below” or “abstractions dominate, then system collapses”
Mazes: “Modern large organizations are toxic” or “middle management destroys your soul”
Asymmetric Justice: “Unintentional harms count, benefits don’t” or “Counting only harms destroys action” or similar.
Or one can notice that we are abstracting out a conclusion from someone else’s thing, or think about what we hope another will take away. Often but not always it’s the title. Constantly look to improve. Pain not unit of effort. Interacting with system creates blameworthiness. Default AI destroys all value. Claim bailey, retreat to motte. Society stuck in bad equilibrium. Etc.
I’ve found this valuable to keep in mind.