I think it really depends on the situation. Ideally, you’d take the best argument on offer for both positions, but this assumes arguments for both positions are equally easy for you to find (possibly with help from third parties, who aren’t necessarily optimizing [well] for you making good decisions). In practice, I try to infer the blind spots and spin-incentives behind the arguments I hear, and to think about what world we’d have to live in for these particular lines of argument to be the ones that reach me via these sources.
I never do any kind of averaging or maximizing move, and although what I said above sounds more complicated than saying “average!” or “maximize!”, it mostly runs in the background, on autopilot, at this point, so it doesn’t take much extra time to implement. So I think it’s a false dichotomy.
In some sense, one strong argument seems like it should defeat a bunch of weak arguments, but this assumes you’re in a situation you never actually encounter in real life[1]. In reality, once you have one strong argument and a bunch of weak arguments, the real work begins: seeing how far you can push the weak arguments toward becoming strong ones (either by thinking them through yourself or by seeking out people who seem convinced by the weak-to-you versions of the arguments). And if you can’t do this, you should evaluate how likely it is that you could make one of those weaker arguments stronger (via learned heuristics about which sorts of weak arguments are shadows of stronger ones, by looking at their advocates, at those who’ve been convinced, at the incentives involved, etc.).
Although the original text talks very specifically about policies, I think the same applies when reasoning about progressively more accurate abstractions of the world. What you’re really deciding on is which line of research inquiry to devote more thought to, with little expectation that either hypothesis on offer will itself be the true, fully general theory (even if, and especially if, it can be developed into one with a bit, or a lot, of work).