But what I most want to read from you right now is an in-depth account of which things in the world have gone, or are going, most right, and the ways in which you think metaphilosophical competence or consequentialist reasoning contributed to them.
(First a terminological note: I wouldn’t use the phrase “metaphilosophical competence”, and instead tend to talk about either “metaphilosophy”, meaning the study of the nature of philosophy and philosophical reasoning, how philosophical problems should be solved, etc., or “philosophical competence”, meaning how good someone is at solving philosophical problems or doing philosophical reasoning. Sometimes I talk about them together, as in “metaphilosophy / AI philosophical competence”, because I think solving metaphilosophy is the best way to improve AI philosophical competence. Here I’ll interpret you to just mean “philosophical competence”.)
To answer your question, it’s pretty hard to think of really good examples, I think because humans are very bad at both philosophical competence and consequentialist reasoning, but here are some:
the game theory around nuclear deterrence, helping to prevent large-scale war so far
economics and its influence on government policy, e.g., providing support for property rights, markets, and regulations around things like monopolies and externalities (but it’s failing pretty badly on AGI/ASI)
analytical philosophy making philosophical progress insofar as it asks important questions and delineates various plausible answers (but doing badly insofar as individual philosophers hold inappropriately confident views, and the field fails to focus on the really important problems, e.g., those related to AI safety)
certain philosophers / movements (rationalists, EA) emphasizing philosophical (especially moral) uncertainty to some extent, and realizing the importance of AI safety
MIRI updating on evidence/arguments and pivoting strategy in response (albeit too slowly)
I guess this isn’t an “in-depth account” but I’m also not sure why you’re asking for “in-depth”, i.e., why doesn’t a list like this suffice?
I should also try to write up the same thing, but about how virtues contributed to good things.
I think non-consequentialist reasoning or ethics probably worked better in the past, when the world changed more slowly and we had more chances to learn from our mistakes (and refine our virtues/deontology over time). So I wouldn’t necessarily find this kind of writing very persuasive, unless it somehow addressed my central concern: virtues do not seem to be the kind of thing capable of doing enough “compute/reasoning” to find consistently good strategies in a fast-changing environment on the first try.
To answer your question, it’s pretty hard to think of really good examples, I think because humans are very bad at both philosophical competence and consequentialist reasoning, but here are some:
If this is true, then it should significantly update us away from the strategy “solve our current problems by becoming more philosophically competent and doing good consequentialist reasoning”, right? If you are very bad at X, then all else equal you should try to solve problems using strategies that don’t require you to do much X.
You might respond that there are no viable strategies for solving our current problems without applying a lot of philosophical competence and consequentialist reasoning. I think scientific competence and virtue ethics are plausibly viable alternative strategies (though the line between scientific and philosophical competence seems blurry to me, as I discuss below). But even setting that disagreement aside, humanity solved many big problems in the past without using much philosophical competence and consequentialist reasoning, so it seems hard to be confident that we won’t solve our current problems in other ways.
Out of your examples, the influence of economics seems most solid to me. I feel confused about whether game theory itself made nuclear war more or less likely—e.g. von Neumann was very aggressive, perhaps related to his game theory work, and maybe MAD provided an excuse to stockpile weapons? Also the Soviets didn’t really have the game theory IIRC.
On the analytical philosophy front, the clearest wins seem to be cases where they transitioned from doing philosophy to doing science or math—e.g. the formalization of probability (and economics to some extent too). If this is the kind of thing you’re pointing at, then I’m very much on board—that’s what I think we should be doing for ethics and intelligence. Is it?
Re the AI safety stuff: it all feels a bit too early to say what its effects on the world have been (though on net I’m probably happy it has happened).
I guess this isn’t an “in-depth account” but I’m also not sure why you’re asking for “in-depth”, i.e., why doesn’t a list like this suffice?
Because I have various objections to this list (some of which are detailed above) and with such a succinct list it’s hard to know which aspects of them you’re defending, which arguments for their positive effects you find most compelling, etc.