I am Issa Rice. https://issarice.com/
Messaging sounds good to start with (I find calls exhausting so only want to do it when I feel it adds a lot of value).
Ah ok, cool. I’ve been doing something similar for the past few years, and the approach in this post is somewhat similar to the one I’ve been using for reviewing math, so I was curious how it was working out for you.
Have you actually tried this approach, and if so for how long and how has it worked?
So there’s a need for an intermediate stage between creating an extract and creating a flashcard. This need is what progressive highlighting seeks to address.
I haven’t actually done incremental reading in SuperMemo so I’m not sure about this, but I believe extract processing is meant to be recursive: first you extract a larger portion of the text that seems relevant; when you encounter it again, the extract is treated like an original article, so you might extract just a single sentence from it; and when you encounter that sentence again, you might turn it into a cloze deletion or Q&A card.
This sounds a lot like (a subset of) incremental reading. Instead of highlighting, one creates “extracts” and reviews those extracts over time to see if any of them can be turned into flashcards. As you suggest, there is no pressure to immediately turn things into flashcards on a first-pass of the reading material. These two articles about incremental reading emphasize this point. A quote from the first of these:
Initially, you make extracts because “Well it seems important”. Yet to what degree (the number of clozes/Q&As) and in what formats (cloze/Q&A/both) are mostly fuzzy at this point. You can’t decide wisely on what to do with an extract because you lack the clarity and relevant information to determine it. In other words, you don’t know the extract (or in general, the whole article) well enough to know what to do with it.
In this case, if you immediately process an extract, you’ll tend to make mistakes. For example, for an extract, you should have dismissed it but you made two clozed items instead; you may have dismissed it when it’s actually very important to you, unbeknown to you at that moment. With lowered quality of metamemory judgments, skewed by all the cognitive biases, the resulting clozed/Q&A item(s) is just far from optimal.
Does life extension (without other technological progress to make the world in general safer) lead to more cautious lifestyles? The longer the expected years left, the more value there is in just staying alive compared to taking risks. Since death would mean missing out on all the positive experiences for the rest of one’s life, I think an expected value calculation would show that even a small risk is not worth taking. Does this mean all risks that don’t get magically fixed by life extension (for example, activities like riding a motorcycle or driving on the highway seem risky even with life extension technology) are not worth taking? (There is the obvious exception where, if one knows when one is going to die, one can take more risks as one reaches the end of one’s life, just as in a pre-life-extension world.)
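The expected value calculation I have in mind can be sketched as follows (all numbers are made up for illustration, and the model ignores discounting and quality-of-life differences):

```python
# Hypothetical sketch: a fixed per-activity death risk destroys more expected
# value the more expected years of life remain, so with life extension the
# same risky activity looks much worse.
def ev_loss_of_activity(death_risk, years_remaining, value_per_year=1.0):
    """Expected years of life lost from one instance of the activity."""
    return death_risk * years_remaining * value_per_year

# Made-up risk: a 1-in-100,000 chance of death per motorcycle trip.
risk = 1e-5
print(ev_loss_of_activity(risk, 50))    # pre-life-extension: ~50 years left
print(ev_loss_of_activity(risk, 5000))  # with life extension: ~5000 years left
```

Under these assumptions the expected loss from the same trip is 100 times larger in the life-extension world, which is the intuition behind the question above.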
I haven’t thought about this much, and wouldn’t be surprised if I am making a silly error (in which case, I would appreciate having it pointed out to me!).
I like this tag! I think the current version of the page is missing the insight that influence gained via asymmetric weapons/institutions is restricted/inflexible, i.e. an asymmetric weapon not only helps out only the “good guys” but also constrains the “good guys” into only being able to do “good things”. See this comment by Carl Shulman. (I might eventually come back to edit this in, but I don’t have the time right now.)
The EA Forum wiki has stubs for a bunch of people, including a somewhat detailed article on Carl Shulman. I wonder if you feel similarly unexcited about the articles there (if so, it seems good to discuss this with people working on the EA wiki as well), or if you have different policies for the two wikis.
I also just encountered Flashcards for your soul.
Ah ok, that makes sense. Thanks for clarifying!
It seems to already be on LW.
Edit: oops, looks like the essay was posted on LW in response to this comment.
I’m unable to apply this tag to posts (this tag doesn’t show up when I search to add a tag).
For people who find this post in the future, Abram discussed several of the points in the bullet-point list above in Probability vs Likelihood.
Regarding base-rate neglect, I’ve noticed that in some situations my mind seems to automatically do the correct thing. For example if a car alarm or fire alarm goes off, I don’t think “someone is stealing the car” or “there’s a fire”. L(theft|alarm) is high, but P(theft|alarm) is low, and my mind seems to naturally know this difference. So I suspect something more is going on here than just confusing probability and likelihood, though that may be part of the answer.
I understood all of the other examples, but this one confused me:
A scenario is likely if it explains the data well. For example, many conspiracy theories are very likely because they have an answer for every question: a powerful group is conspiring to cover up the truth, meaning that the evidence we see is exactly what they’d want us to see.
If the conspiracy theory really was very likely, then we should be updating on this to have a higher posterior probability on the conspiracy theory. But in almost all cases we don’t actually believe the conspiracy theory is any more likely than we started out with. I think what’s actually going on is the thing Eliezer talked about in Technical Explanation where the conspiracy theory originally has the probability mass very spread out across different outcomes, but then as soon as it learns the actual outcome, it retroactively concentrates the probability mass on that outcome. So I want to say that the conspiracy theory is both unlikely (because it did not make an advance prediction) and improbable (very low prior combined with the unlikeliness). I’m curious if you agree with that or if I’ve misunderstood the example somehow.
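The "spread-out probability mass" point can be sketched numerically (the outcome count, likelihoods, and priors here are all made up for illustration):

```python
# Hypothetical sketch: a theory that "explains everything" must spread its
# probability mass over many possible outcomes, so its honest likelihood
# for the one outcome actually observed is low.
n_outcomes = 100
likelihood_conspiracy = 1 / n_outcomes  # mass spread over 100 possible outcomes
likelihood_specific = 0.9               # a theory that predicted this outcome in advance

prior_conspiracy = 0.01
prior_specific = 0.5

# Bayes: posterior odds = prior odds * likelihood ratio
posterior_odds = ((prior_conspiracy / prior_specific)
                  * (likelihood_conspiracy / likelihood_specific))
print(posterior_odds)
```

Once the likelihoods are assigned in advance rather than retroactively, the conspiracy theory's posterior odds collapse, which is the sense in which it is both unlikely and improbable.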
Thanks, I like your rewrite and will post questions instead in the future.
I think I understand your concerns and agree with most of them. One thing that still feels “off” to me is this: given that there seem to be a lot of in-person-only discussions about “cutting edge” ideas and “inside scoop”-like things (which trickle out via venues like Twitter and random Facebook threads, and only much later get written up as blog posts), how can people who primarily interact with the community online (such as me) keep up? I don’t want to have to pay attention to everything on Twitter or Facebook, and would like a short document that gets to the point and links out to other things in case I feel curious. (I’m willing to grant that my emotional experience might be rare, and that the typical user would instead feel alienated in just the way you describe.)
The closest thing I’ve seen is Unusual applications of spaced repetition memory systems.
For those reading this thread in the future, Alex has now adopted a more structured approach to reviewing the math he has learned.
Thanks, that worked and I was able to fix the rest of the images.
I just tried doing this in a post, and while the images look fine in the editor, they come out huge once the post is published. Any ideas on what I can do to fix this? (I don’t see any option in the editor to resize the images, and I’m scared of converting the post to markdown.)