But what I most want to read from you right now is an in-depth account of which things in the world have gone or are going most right, and the ways in which you think metaphilosophical competence or consequentialist reasoning contributed to them.
(First a terminological note: I wouldn’t use the phrase “metaphilosophical competence”, and instead tend to talk about either “metaphilosophy”, meaning the study of the nature of philosophy and philosophical reasoning, how philosophical problems should be solved, etc., or “philosophical competence”, meaning how good someone is at solving philosophical problems or doing philosophical reasoning. And sometimes I talk about them together, as in “metaphilosophy / AI philosophical competence”, because I think solving metaphilosophy is the best way to improve AI philosophical competence. Here I’ll interpret you to just mean “philosophical competence”.)
To answer your question, it’s pretty hard to think of really good examples, I think because humans are quite weak in both philosophical competence and consequentialist reasoning, but here are some:
the game theory around nuclear deterrence, which has helped prevent large-scale war so far (a toy sketch of this kind of analysis follows this list)
economics and its influence on government policy, e.g., providing intellectual support for property rights, markets, and regulations addressing monopolies and externalities (but it’s failing pretty badly on AGI/ASI)
analytic philosophy making progress insofar as it asks important questions and delineates various plausible answers (but doing badly insofar as individual philosophers hold inappropriately high levels of confidence in their positions, and fail to focus on the really important problems, e.g., those related to AI safety)
certain philosophers / movements (rationalists, EA) emphasizing philosophical (especially moral) uncertainty to some extent, and realizing the importance of AI safety
MIRI updating on evidence/arguments and pivoting its strategy in response (albeit too slowly)
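To make the nuclear deterrence item a bit more concrete, here’s a minimal sketch of the kind of analysis I have in mind: model a crisis as a simple two-player game where each side can refrain or strike first, with payoffs (made up purely for illustration) that assume credible second-strike retaliation, and check which outcomes are stable. Under those assumptions mutual restraint is the only pure-strategy Nash equilibrium, which is the basic consequentialist logic being credited above.

```python
# A toy deterrence game in the spirit of Chicken: each side chooses to
# refrain or to strike first. Payoff numbers are invented for illustration
# and assume credible second-strike retaliation, so a first strike is
# ruinous for the attacker as well as the victim.

from itertools import product

ACTIONS = ["refrain", "strike"]

# payoffs[(row_action, col_action)] = (row player's payoff, column player's payoff)
payoffs = {
    ("refrain", "refrain"): (0, 0),        # uneasy peace
    ("refrain", "strike"):  (-95, -90),    # row is struck first but retaliates
    ("strike",  "refrain"): (-90, -95),
    ("strike",  "strike"):  (-100, -100),  # mutual destruction
}

def pure_nash_equilibria(payoffs):
    """Return the action profiles where neither player gains by deviating alone."""
    equilibria = []
    for a_row, a_col in product(ACTIONS, repeat=2):
        u_row, u_col = payoffs[(a_row, a_col)]
        row_ok = all(payoffs[(d, a_col)][0] <= u_row for d in ACTIONS)
        col_ok = all(payoffs[(a_row, d)][1] <= u_col for d in ACTIONS)
        if row_ok and col_ok:
            equilibria.append((a_row, a_col))
    return equilibria

print(pure_nash_equilibria(payoffs))  # -> [('refrain', 'refrain')]
```

Real deterrence analysis is of course far more complicated (mixed strategies, repeated play, imperfect information, accident risk), but this is the flavor of reasoning I mean.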
I guess this isn’t an “in-depth account” but I’m also not sure why you’re asking for “in-depth”, i.e., why doesn’t a list like this suffice?
I should also try to write up the same thing, but about how virtues contributed to good things.
I think non-consequentialist reasoning or ethics probably worked better in the past, when the world changed more slowly and we had more chances to learn from our mistakes (and refine our virtues/deontology over time). So I wouldn’t necessarily find this kind of writing very persuasive, unless it somehow addressed my central concern: virtues do not seem to be the kind of thing capable of doing enough “compute/reasoning” to find consistently good strategies in a fast-changing environment on the first try.