I operate by Crocker’s rules.
niplav
High Status Eschews Quantification of Performance
[Question] What is Going On With CFAR?
Since 2023-07-08 I have flossed only on the right side of my mouth, and today I asked the dentist to guess which side I’d been flossing. She guessed left.
Have Attention Spans Been Declining?
Subscripts for Probabilities
Ah, but there is some non-empirical cognitive work done here that is really relevant, namely the choice of what equivalence class to put Bernie Bankman into when trying to forecast. In the dialogue, the empiricists use the equivalence class of Bankman in the past, while you propose using the equivalence class of all people that have offered apparently-very-lucrative deals.
And this choice is in general non-trivial, and requires abstractions and/or theory. (And the dismissal of this choice as trivial is my biggest gripe with folk-frequentism—what counts as a sample, and what doesn’t?)
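To make the reference-class choice concrete, here is a toy sketch. All counts and numbers below are hypothetical, purely for illustration; the point is only that the same evidence yields very different forecasts depending on which equivalence class you condition on.

```python
# Toy illustration: the forecast depends heavily on which reference
# class ("equivalence class") you place the case into.
# All counts below are hypothetical.

def base_rate(successes: int, trials: int) -> float:
    """Laplace's rule of succession: (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

# Reference class A: this person's own past deals (all paid out so far).
own_track_record = base_rate(successes=10, trials=10)

# Reference class B: everyone who has offered an apparently
# too-good-to-be-true deal (most of which were scams).
lucrative_offers = base_rate(successes=2, trials=100)

print(f"P(pays out | own track record): {own_track_record:.2f}")
print(f"P(pays out | lucrative offers): {lucrative_offers:.2f}")
```

The two classes give roughly 0.92 versus 0.03, and nothing inside the frequentist machinery tells you which class was the right one to use.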
Transfer Learning in Humans
Self-Blinded L-Theanine RCT
Properties of Good Textbooks
Range and Forecasting Accuracy
Cryonics Cost-Benefit Analysis
Further evidence that I should write a factpost investigating whether attention spans have been declining.
Self-Blinded Caffeine RCT
I think the words “optimism” and “pessimism” are really confusing, because they conflate the probability, utility, and steam of things:
You can be “optimistic” because you believe a good event is likely (or a bad one unlikely), because you believe a future event (perhaps even an unlikely one) would be good, or because you have a plan, idea, or stance for which you have high recursive self-trust: a reflectively stable prediction that you will keep engaging in it.
So you could be “pessimistic” in the sense that extinction due to AI is unlikely (say, <1%) but you find it super bad and you currently don’t have anything concrete that you can latch onto to decrease it.
Or (in the case of e.g. MIRI) you might have (“indefinitely optimistic”?) steam for reducing AI risk, find it moderately to extremely likely, and think it’s going to be super bad.
Or you might think that extinction would be super bad, and believe it’s unlikely (as Belrose and Pope do) and have steam for both AI and AI alignment.
But the terms are apparently confusing to many people, and I think using this terminology can “leak” optimism or pessimism from one category into another, leading to worse decisions and incorrect beliefs.
Please Bet On My Quantified Self Decision Markets
Brain-Computer Interfaces and AI Alignment
Iqisa: A Library For Handling Forecasting Datasets
Oh nice, another post I don’t need to write anymore :-D
Some disjointed thoughts I had on this:
Feedback loops can be characterized along at least three axes:
Speed: How quickly you get feedback from actions you take. Archery has a very fast feedback loop: You shoot an arrow and one or two seconds later you see what the outcome is.
Noise: How noisy the feedback is. High-frequency trading has fast feedback loops, but they have a lot of noise, and finding the signal is the difficult part.
Richness: How much information you’re getting. Dating is one example: online dating has extremely poor feedback loops, only a couple of bits per interaction (did the other person respond, and what did they say), while talking & flirting with people in person gives extremely rich feedback: the entire visual, acoustic, and tactile field (plus perhaps smell? I don’t know much about human pheromones). That’s probably kilobytes per minimal motor-action, and megabytes per second.
Fast & low-noise & rich feedback loops are the best, and improving the feedback loop in any of those dimensions is super valuable.
As an example, forecasting has meh feedback loops: they can be very slow (days at least, but more likely months or years!), the feedback is kind of poor (only a few bits per forecast), but at least there’s not that much noise: you forecast what the question says. (Maybe this is why forecasters really don’t like questions resolving on technicalities, the closest thing to noise here.)
But one can improve the richness of the forecasting feedback loop by writing out one’s reasoning, so one can update on the entire chain of thought once the resolution comes. Similarly, programming has much better feedback loops than mathematics, which is why I’d recommend that someone learn programming before math (in general, learn things with fast & rich feedback loops earlier and slow & poor ones later).
Also, feedback loops feel to me like they’re in the neighbourhood of both flow & addiction? Maybe flow is a feedback loop with a constant or increasing gradient, while addiction is a feedback loop with a decreasing gradient (leading into a local & shallow minimum).
When I started reading the Sequences, I started doing forecasting on Metaculus within 3 months (while still reading them). I think being grounded at that time in actually having to do reasoning with probabilities & receiving feedback in the span of weeks made the experience of reading the Sequences much more lasting to me. I also think that the lack of focus on any rationality verification made it significantly harder to develop an art of rationality. If you have a metric you have something to grind on, even if you abandon it later.
Heuristics for choosing/writing good textbooks (see also here):
Has exercises
Exercises are interspersed in the text, not in large chunks (better at the end of sections, not just at the end of chapters)
Solutions are available but difficult to access (in a separate book, or on the web); this reduces the urge to look up the solution when one is stuck
Of varying difficulty (I like the approach Concrete Mathematics takes: everything from trivial applications to research questions)
I like it when difficulty is indicated, but it’s also okay if the book says clearly at the beginning that some very difficult exercises are left unmarked, as mystery boxes
Takes many angles
Has figures and illustrations. I don’t think I’ve encountered a textbook with too many yet.
Has many examples. I’m not sure yet about the advantage of recurring examples. Same point about amount as with figures.
Includes code, if possible. It’s cool if you tell me the equations for computing the likelihood ratio of a hypothesis & dataset, but it’s even cooler if you give me some sample code that I can use and extend along with it.
Uses typography
You can use boldface and italics and underlining for reading comprehension, example here.
Use section headings and paragraphs liberally.
Artificial Intelligence: A Modern Approach has one- to three-word side-notes describing the content of each paragraph. This is very good.
Distinguish definitions, proofs, examples, case-studies, code, formulas &c.
Dependencies
Define terms before they are used. (This is not a joke. Population Genetics uses the term “substitution” on p. 32 without defining it, and exercise 12-1 from Naive Set Theory depends on the axiom of regularity, but the book doesn’t define it.)
If the book has pre-requisites beyond what a high-schooler knows, a good textbook lists those pre-requisites and textbooks that teach them.
Indicators
Multiple editions are an indicator for quality.
Ditto for multiple authors.
A conversational and whimsical style can be nice, but shouldn’t be overdone.
Hot take: I get very little value from proofs in math textbooks, and consider them usually unnecessary (unless they teach a new proof method). I like the Infinite Napkin for its approach.
Wishlist
Flashcard sets that come together with textbooks. Please.
3blue1brown style videos that accompany the book. From Zero to Geo is a great step in that direction.
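As an example of the “include code” point above: a textbook presenting the likelihood-ratio equation could ship a snippet like this alongside it. A minimal sketch; the coin-flip model and numbers are my own choice, not from any textbook.

```python
# Likelihood ratio of two hypotheses given a dataset of coin flips.
# H1: coin is biased towards heads (p = 0.8); H2: coin is fair (p = 0.5).
# Model and numbers are illustrative only.

def likelihood(p_heads: float, flips: list[bool]) -> float:
    """P(data | hypothesis) for i.i.d. Bernoulli coin flips."""
    result = 1.0
    for heads in flips:
        result *= p_heads if heads else (1 - p_heads)
    return result

data = [True, True, True, False, True]  # 4 heads, 1 tail

lr = likelihood(0.8, data) / likelihood(0.5, data)
print(f"Likelihood ratio (biased : fair) = {lr:.2f}")
```

A reader can immediately swap in their own data or hypotheses and extend it, which a bare equation doesn’t invite.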
This post would be strongly improved by 3 examples of decisions you made differently due to this heuristic.