Sorry, to be clear: I don’t claim LW has overlooked these topics (except unawareness and alternatives to classical Bayesian epistemology, which I do think have been quite severely neglected). The reason I wrote this post was that the following claims seem non-obvious:
1. Thinking further about wisdom concepts these days is not just a distraction from “notkilleveryoneism”.
2. The concepts in the checklist do in fact seem to satisfy conditions (1)+(2), i.e., the definition of “wisdom concepts”. (My impression is that it’s somewhat common to think that many of the concepts I list admit “objective” answers, i.e., that one should just believe and do whatever “works” / has the best empirical track record, and that all sufficiently intelligent agents will converge to those answers. ETA: Relatedly, it might not be salient to some readers that the answer to “is this decision a catastrophic mistake?” could be sensitive to all of these topics.)
3. The sub-questions I list are genuinely open. (E.g., I expect the claim that agents aren’t necessarily rationally required to avoid diachronic sure losses to be controversial; see the sketch below.)
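For readers unfamiliar with the term, “diachronic sure loss” refers to the kind of exploitation exhibited by the standard Lewis-style diachronic Dutch book against non-conditionalizers; the specific credences and stakes below are illustrative numbers of my own choosing, not anything from the post. Suppose at $t_0$ the agent has $P_0(E) = 0.5$ and $P_0(H \mid E) = 0.6$, but its update rule will leave it with $P_1(H) = 0.4$ upon learning $E$. A bookie offers only trades the agent regards as fair at the time they are offered: at $t_0$, the agent pays \$0.60 for a bet paying \$1 on $H \wedge E$ (called off, with the \$0.60 refunded, on $\neg E$), and receives \$0.10 in exchange for paying \$0.20 on $\neg E$; at $t_1$, if $E$ obtains, the agent receives \$0.40 in exchange for paying \$1 on $H$. Its net payoff in each state:

$$\begin{aligned}
\neg E:&\quad 0 + (0.10 - 0.20) = -0.10\\
E \wedge H:&\quad (1 - 0.60) + 0.10 + (0.40 - 1) = -0.10\\
E \wedge \neg H:&\quad -0.60 + 0.10 + 0.40 = -0.10
\end{aligned}$$

So the agent loses \$0.10 $\,(= (0.6 - 0.4) \cdot P_0(E))$ come what may. The open question flagged above is whether vulnerability to this diachronic form of exploitation is genuinely a failure of rationality, as the synchronic Dutch book is usually taken to be.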