I like that this post took a very messy, complicated subject, and picked one facet of it to gain a really crisp understanding of. (MIRI’s 2018 Research Direction update goes into some thoughts on why you might want to become deconfused on a subject, and the Rocket Alignment Problem is a somewhat more narrativized version.)
I personally suspect that the principles Zack points to here aren’t the primary principles at play for why epistemic factions form. But, it is interesting to explore that even when you strip away tons of messy-human-specific-cognition (i.e. propensity for tribal loyalty for ingroup protection reasons), a very simple model of purely epistemic agents may still form factions.
I also really liked that Zack lays out his reasoning very clearly, with coding steps that you can follow along with. I should admit that I haven’t followed along all the way through (I got about a third of the way through before realizing I’d need to set aside more time to really process it). So, this curation is not an endorsement that all his coding checks out. The bar for Curated is, unfortunately, not the bar for Peer Review. (But! Later on when we get to the 2020 LessWrong Review, I’d want this sort of thing checked more thoroughly.)
It is still relatively uncommon on LessWrong for someone to even rise to the bar of “clearly lays out their reasoning in a very checkable way”, and when someone does that while making a point that seems interesting and important-if-true, it seems good to curate it.
I had a very useful conversation with someone about how and why I am rambly. (I rambled a lot in the conversation!).
Disclaimer: I am not making much effort to not ramble in this post.
A couple takeaways:
1. Working Memory Limits
One key problem is that I introduce so many points, subpoints, and subthreads that I overwhelm people’s working memory (where the human working memory limit is roughly “4-7 chunks”).
It’s sort of embarrassing that I didn’t concretely think about this before, because I’ve spent the past year SPECIFICALLY thinking about working memory limits, and how they are the key bottleneck on intellectual progress.
So, one new habit I have is “whenever I’ve introduced more than 6 points to keep track of, stop and figure out how to condense the working tree of points down to <4.”
(Ideally, I also keep track of this in advance and word things more simply, or give better signposting for what overall point I’m going to make, or why I’m talking about the things I’m talking about)
2. I just don’t finish sente
I frequently don’t finish sentences, whether in person or in text (like emails). I’ve known this for a while, although I kinda forgot recently. I switch abruptly to a new sentence when I realize the current sentence isn’t going to accomplish the thing I want, and I have a Much Shinier Sentence Over Here that seems much more promising.
But, people don’t understand why I’m making the leap from one half-finished thought to another.
So, another simple habit is “make sure to finish my god damn sentences, even if I become disappointed in them halfway through”
3. Use Mindful Cognition Tuning to train on *what is easy for people to follow*, as well as to improve the creativity/usefulness of my thoughts.
I’ve always been rambly. But a thing that I think has made me EVEN MORE rambly in the past 2 years is a mindful-thinking-technique, where you notice all of your thoughts on the less-than-a-second level, so that you can notice which thought patterns are useful or anti-useful.
This has been really powerful for improving my thought-quality. I’m fairly confident that I’ve become a better programmer and better thinker because of it.
But, it introduces even more meta-thoughts for me to notice while I’m articulating a sentence, which distract me from the sentence itself.
What I realized last weekend was: I can use Mindful Cognition to notice what types of thoughts/sentences are useful for *other people’s comprehension of me*, not just how useful my original thought processes are.
The whole point of the technique is to improve your feedback loop (both speed and awareness), which makes it easier to deliberately practice. I think if I just apply that towards Being More Comprehensible, it’ll change from being a liability in rambliness to an asset.
I think I might have phrased the OP as “hey, is there a reason to use Foretold or Metaculus over PredictionBook?”, and it sounds in both cases like they’re really optimized for a different thing.
That makes sense. Thanks for chiming in.
(serious question, I’m not sure what the right process here is)
What do you think should happen instead of “read through and object to Wei_Dai’s existing blogposts”? Is there a different process that would work better? Or do you think this generally isn’t worth the time? Or do you think Wei Dai should write a blogpost that more clearly passes your “sniff test” of “probably compelling enough to be worth more of my attention”?
I have now created a Philosophy of Language tag. I haven’t yet created a “disagreement” tag because it feels like it could use a more precise name. “Philosophy of Disagreement” is… okay but not great.
This was new information to me, thanks.
I like this post for a few reasons, most of which have been covered by other commenters. The object-level topic is a bit atypical for LessWrong, but
a) I think it ends up touching on some key meta-level LW topics, such as gears-level-understanding, and learning how to learn.
b) as Romeo notes, I think it’d be a bit better to have at least some more posts like this on the margin, even if at face value they don’t obviously end up connecting to the “obvious” LessWrong paradigm topics.
The post itself is well written and makes its points quite clearly.
On a personal level, it also touches upon topics I wrote about several years ago, and I think it makes the points I was trying to make much more clearly.
Agreed with the status/feelings cause. And I’m not 100% sure the solution is “prevent people from doing the thing they instinctively want to do” (especially “all the time.”)
My current guess is “let people crowd around the charismatic and/or interesting people, but treat it more like a panel discussion or fireside chat, like you might have at a conference, where mostly 2-3 people are talking and everyone else is more formally ‘audience.’”
But doing that all the time would also be kinda bad in different ways.
In this case… you might actually be able to fix this with technology? Can you literally put room-caps on the rooms, so if someone wants to be the 4th or 6th person in a room they… just… can’t?
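A minimal sketch of what a hard room cap could look like, assuming a hypothetical videochat server with a join handler (the `Room` class and its methods here are made up for illustration, not any real platform’s API):

```python
# Hypothetical hard room cap: the 4th (or 6th) person simply can't join.
class Room:
    def __init__(self, name, cap=3):
        self.name = name
        self.cap = cap          # maximum number of people allowed in
        self.members = []

    def try_join(self, person):
        """Return True if the person got in, False if the room is full."""
        if len(self.members) >= self.cap:
            return False        # room is full; they just... can't
        self.members.append(person)
        return True

room = Room("fireside-chat", cap=3)
for person in ["alice", "bob", "carol"]:
    room.try_join(person)       # first three get in
room.try_join("dave")           # returns False: the cap holds
```

The interesting design question is what the client shows the fourth person (a waiting list? a nudge toward another room?), but the enforcement itself is trivial.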
Put another way: insofar as you’re defining “Ritual” as “blackbox process you don’t really understand”, it’s probably true for most rituals that you’d be better off if you understood the underlying process. The question is how often it’s worth paying the upfront cost. You can’t do it for literally every process you use.
A’ight. Engaging with the post a bit more than my previous comment (note: haven’t yet read the whole thing, just the first half).
I have some kind of aversive reaction to the claim:
The right way to approach baking is to realize it is not a ritual. Instead, try to understand the principles of how baking works, to understand why an ingredient is in the recipe, and why a particular step is needed
I certainly agree that you’ll gain a lot of benefits if you approach baking this way. But, like, sometimes I just don’t have the time/energy/investment to fully understand a process, and just want a blackbox procedure that mostly works. And, sometimes I try to fudge that procedure and then want to complain about it a bit.
(I think your overall point stands, and in other circumstances I might have been the person arguing that “yeah you really should understand the underlying process here.”)
Man, I came here much more excited about the prospect for someone making the case “Baking is not a symbolic act that transforms you via meaning you ascribe to it”, which, well, sounded (usually) true, but hinted at some kind of very interesting worldview disagreement somewhere.
(This is mostly a joke. Post seems pretty straightforward)
(search function has been updated to include tags. Also, when you hit the return key while searching you go to the new Search Page, which is a bit larger and easier to work with)
Heh, I do often find spreadsheets to work the best, even if they’re a bit janky/ugly, because I can customize them to be exactly what I want.
But it actually looks like PredictionBook may be superior to a spreadsheet (for me at least), by virtue of being pretty simple to enter data, as well as automatically composing your “correct predictions” graph, and sending you reminder emails when the prediction is due to resolve.
I did just check if PredictionBook could set all predictions to “private” instead of me having to change the setting every time, and the answer is yes, and also it looks like the UI has a few other nice-to-haves that actually make “low friction prediction” achievable.
I think I might need to create a custom Stylish overlay for the page to clear away some excess clutter, so it feels a bit less overwhelming to use. But, that’s a fairly simple UI shift and one that I can create for myself. So PredictionBook might just be a good solution.
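For what it’s worth, the “correct predictions” graph that PredictionBook auto-generates is the kind of thing a spreadsheet-plus-script setup can also do in a few lines. A minimal sketch of the underlying calibration check, with made-up example predictions:

```python
from collections import defaultdict

# Each prediction: (stated confidence, whether it came true).
# These are made-up examples, not real data.
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False),
]

# Bucket by stated confidence and compare to the actual hit rate.
# A well-calibrated predictor's 90% bucket should come true ~90% of the time.
buckets = defaultdict(list)
for confidence, outcome in predictions:
    buckets[confidence].append(outcome)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"{confidence:.0%} bucket: {hit_rate:.0%} came true ({len(outcomes)} predictions)")
```

The value of a dedicated tool is mostly in lowering the friction of *entering* predictions and getting reminded when they resolve, not in the calculation itself.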
Is there an option for Foretold to become Very Low Friction somehow? I agree with the “5 second level predictions” thing being a key issue.
My point isn’t “who cares about emotional safety, let them filter themselves out if they can’t handle the truth [as I see it]”, but rather that these are two separate dimensions, and while they are coupled they really do need to be regulated independently for best results. Any time you try to control two dimensions with one lever you end up having a 1d curve that you can’t regulate at all, and therefore is free to wander without correction.
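A toy illustration of the one-lever point, assuming the two dimensions are both functions of a single control parameter `t` (the particular coupling `y = sin(x)` is arbitrary, chosen just to make the geometry visible):

```python
import math

# One lever t controls two dimensions at once.
def state(t):
    # Any coupling f(t) works; the point is that both outputs move together.
    return (t, math.sin(t))

# Sweep the lever finely: every reachable (x, y) lies on the curve y = sin(x),
# a 1-dimensional subset of the 2-dimensional state space.
reachable = [state(t / 10) for t in range(0, 63)]

# A target off the curve, e.g. (1.0, 0.0), is unreachable no matter how
# finely you tune t: the only setting with x == 1.0 forces y == sin(1.0).
on_curve = all(abs(y - math.sin(x)) < 1e-9 for x, y in reachable)
```

With two independent levers (`state(t, s) = (t, s)`) the reachable set is the whole plane, which is the "regulated independently" case.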
Thanks, this was a good neat point that gives me a conceptual handle for thinking about the overall problem.
Wow. When you gave the first example, my thought was “huh, game is doing typical deontological bullcrap as games often do.” By the time you got to the end, I thought “wow, the people who designed this game were very attentive and thoughtful, I’m super impressed.”
There are tools to give old posts new frontpage life, which I’d be happy to use here (you can send me a PM about it when you’re ready). But, if you want to go the sequence route instead:
We deliberately make it less obvious to new users how to create sequences (users with 1000+ karma see an obvious button in the user menu). If you go to the /library page, you’ll find a Create Sequence button.
So if you want to go the sequence route, I’d just create new posts from scratch, one at a time, spaced out a couple days apart. (You’ll get more engagement this way. I cry a little inside when I see users write magnum opuses that they create nicely formatted sequences for… and then post all at once, which is overwhelming, and people don’t read them.)
Relatedly, I’d crosspost old content over at a rate of around 1-per-2-days, and check to see which sort of content gets engagement/upvotes/comments.
PredictionBook is basically my BATNA here. Seems better than homebrewing something. But I wanted to check if there were any better options that had come out in the past few years.