The Alchemy of Air (Hager):
+ did well when I fact checked it
+ is at about the right time period
+ is a very good case study of how science changed during that time period
- is not about “welp, we’ve solved science” in particular
Can you clarify what you mean by “this moment”? Is Newton too early? Is the invention of antibiotics too late? Do you mean anything overlapping with the working life of Kelvin?
For me, positive comments aren’t a big deal in a well-upvoted, uncriticized post, but they create a buffer against the stress of harsh criticism in a way upvotes do not.
This is very helpful, thank you.
If a psychology study doesn’t prominently say who its subjects were, the answer is “undergrads at the university, predominantly those in psychology classes” and it is worthless.
Thanks; that is useful to know. I’m going to add a note asking other people if they have the same experience because that will make a big difference to me going forward.
habryka, if you’re inclined to invest more time in this: did you have the same experience with https://acesounderglass.com/2019/10/24/epistemic-spot-check-the-fate-of-rome-round-2/ (which I just now realize never went up on LW)? Trying to narrow down if it’s the book or the format.
Yes, this seems like a good guideline, although I can’t immediately formalize how I detect curiosity. Vague list of things this made me think of:
I think this is a better guideline for books than scientific articles, which are heavily constrained by academic social and funding norms.
One good sign is if *I* feel curious in a concrete way when I read the book. What I mean by concrete is...
e.g. Fate of Rome had a ton of very specific claims about how climate worked and how historical climate conditions could be known. I spent a lot of time trying to verify these and even though I ultimately found them insufficiently supported, there was a concreteness that I still give positive marks for.
In contrast, in my most recently written epistemic spot check (not yet published), I spent a long time on several claims along the lines of “Pre-industrial Britain had a more favorable legal climate for entrepreneurship than continental Europe”. I don’t recall the author giving any specifics on what he meant by “more favorable”, nor how he determined it was true. Investigating felt like a slog because I wasn’t even sure what I was looking for.
I worry I’m being unfair here because maybe if I’d found lots of other useful sources I’d be rating the original book better. But when I investigated I found there wasn’t even a consensus on whether Britain had a strong or weak patent system.
Moralizing around conclusions tends to inhibit genuine curiosity in me, although it can loop around to spite curiosity (e.g., Carol Dweck).
if there are two similar papers from way in the past that you found via Google Scholar and one of them has 10x the citations of the other, take that into account.
This seems great for figuring out the consensus in a field, but not for identifying when the consensus is wrong.
One tactic I like to use is “how do they know this?”, and asking myself or investigating if it’s possible for their answer to demonstrate the thing they’re claiming.
A lot of work doesn’t tell you. Those aren’t necessarily wrong, because they might have a good answer they’re not incentivized to share, but at a minimum it’s going to make it hard to learn from the work.
A lot of work claims to tell you, but when you look they are lying. For example, when I investigated the claim that humans could do four hours of thought-work per day, I looked up the paper’s citations and found they referred to experiments on busywork. Even if those studies were valid, they couldn’t possibly prove anything about thought-work. I consider “pretending to have sources and reasons” a worse sin than “not giving a source or reason”.
More ambiguously, I spent a lot of time trying to figure out how much we could tell, and at what resolution, from ice core data. I still don’t have a great answer on this for the time period I was interested in. But I learned enough to know that the book I was reading (The Fate of Rome) was presenting the data as more clear-cut than it was.
On the other end, The Fall of Rome spends a lot of time explaining why pottery is useful in establishing the economic and especially trade status of an area/era. This was pretty hard to verify from external sources because it’s original research from the author, but it absolutely makes sense and produces a lot of claims and predictions that could be disproved. Moreover, none of the criticism I found of The Fall of Rome addressed his points on pottery: no one was saying “well, I looked at Roman pottery and think the quality stayed constant through the 600s”.
This seems like you’re defining “depriving half the population of agency” as neither requiring nor constituting violence.
I guess one thing you might be able to do is check arguments, as opposed to statements of fact.
First, let me say I think that would be interesting to experiment with. But the reasons to be dubious are more interesting, so I’m going to spend more time on those.
This can definitely rule people out. I don’t think it can totally rule people in, because there’s always a risk someone made a sound argument based on faulty assumptions. In fact this is a large, sticky genre that I’m very worried about.
But assuming that was solved, there’s something I find harder to express that might be at the core of why I’m doing this… I don’t want to collect a bunch of other people’s arguments I can apply as tools, and be confused if two of them conflict. I want a gears-level model of the world such that, if I were left with amnesia on an intellectual deserted island, I could re-derive my beliefs. Argument-checking as I conceive of it now does more of the former. I can’t explain exactly what I’m picturing when I say argument-checking or what kind of amnesia I mean, but there’s something there. My primary interest in argument-checking would be to find a way to engage with arguments that develops that amnesia-proof knowledge.
How are you defining society and progress?
I’ve found Anki really terrible for learning, even for simple things like vocabulary; what it does is help me remember things I’ve already at least half-learned.
We have a round running now, it closes on the 27th. Instructions are here. There are prizes of up to $65 per question, and it also helps us out for the planned BIG tournament later.
Thanks, that’s what I was going for :)
You can see all the past ESCs at https://acesounderglass.com/tag/epistemicspotcheck/ . The current top post is about upcoming changes in the system.