It requires a good handle on experimental design, but biostatisticians do this day in, day out. Hopefully risk analysts in defense institutions do this too.
The original quote said to rate each intervention by how much it helped or hurt the situation, i.e. its individual-level causal effect. None of those study designs will help you with that: They may be appropriate if you want to estimate the average effect across multiple similar situations, but that is not what you need here.
This is a serious question. How do you plan to rate the effectiveness of things like the decision to intervene in Libya, or the decision not to intervene in Syria, under profound uncertainty about what would have happened if the alternative decision had been made?
> The original quote said to rate each intervention by how much it helped or hurt the situation, i.e. its individual-level causal effect. None of those study designs will help you with that: They may be appropriate if you want to estimate the average effect across multiple similar situations, but that is not what you need here.
Yes, I concede that cross-level inferences between aggregate causes (the average of multiple similar situations) and individual-level causes have less predictive power than inferences made within a single level. However, I reckon it's the best available means of making such an inference.
> This is a serious question. How do you plan to rate the effectiveness of things like the decision to intervene in Libya, or the decision not to intervene in Syria, under profound uncertainty about what would have happened if the alternative decision had been made?
Analysts have tools to model and simulate scenarios. Analysis of competing hypotheses is a staple of intelligence methodology. It's also used by earth scientists, but I haven't seen it used elsewhere. Based on this approach, analysts can:
- make predictions about outcomes in Libya both with and without intervention
- when they choose to intervene or not to intervene, measure the actual outcomes
- over the long term, by comparing predicted with actual outcomes, they may decide to re-adjust their predictions post hoc for the counterfactual branch
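The analysis-of-competing-hypotheses step above can be sketched as a simple scoring exercise. The following is a minimal illustration, not a real analysis: every hypothesis name, evidence item, and consistency rating below is invented. ACH's core move is to rank hypotheses by how little evidence contradicts them, rather than by how much supports them:

```python
# Minimal sketch of Analysis of Competing Hypotheses (ACH) scoring.
# All hypotheses, evidence items, and ratings are invented for illustration.
# Each piece of evidence is rated against each hypothesis:
#   +1 consistent, 0 neutral, -1 inconsistent.

hypotheses = ["intervention stabilizes", "intervention destabilizes"]

# evidence item -> per-hypothesis consistency ratings (hypothetical values)
ratings = {
    "regional arms flows increased": [-1, +1],
    "ceasefire held for six months": [+1, -1],
    "refugee outflows grew":         [-1,  0],
}

def inconsistency_score(h_index: int) -> int:
    """Count of evidence items inconsistent with hypothesis h_index."""
    return sum(1 for r in ratings.values() if r[h_index] == -1)

scores = {h: inconsistency_score(i) for i, h in enumerate(hypotheses)}
# ACH favors the hypothesis with the LEAST inconsistent evidence;
# a tie would signal that more discriminating evidence is needed.
best = min(scores, key=scores.get)
```

Real ACH matrices also weight evidence by diagnosticity and reliability; the sketch keeps only the core inconsistency-counting idea.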
> under profound uncertainty about what would have happened if the alternative decision had been made?
I'm not trying to downplay the level of uncertainty, just noting that the methodological considerations remain constant.
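The long-term comparison of predicted against actual outcomes can be sketched as simple calibration tracking, e.g. with a Brier score. All probabilities and outcomes below are invented for illustration:

```python
# Sketch of tracking forecast calibration with the Brier score.
# The forecast probabilities and outcomes below are invented for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.

    Lower is better; always guessing 50% scores 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probabilities assigned to "situation deteriorates within a year"
forecasts = [0.8, 0.3, 0.6, 0.9]
outcomes  = [1,   0,   1,   1]    # what actually happened (1 = deteriorated)

score = brier_score(forecasts, outcomes)  # lower = better calibrated
```

A persistently poor score on the factual branch is the signal, per the last step above, for re-adjusting one's counterfactual-branch predictions as well.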
How self-referentially absurd. More precisely, epidemiologists do this day in, day out, using biostatistical models and then applying causal inference (the counterfactual-knowledge part included). I said biostatisticians because epidemiology isn't in the common vernacular. Ironically, counterfactual knowledge is, to those familiar with the distinction, distinctly removed from the biostatistical domain.
Just for the sake of intellectual curiosity, I wonder what kind of paradox was just invoked prior to this clarification.
It wouldn't be the Epimenides paradox, since that refers to an individual making a self-referentially absurd claim:
> The Epimenides paradox is the same principle as psychologists and sceptics using arguments from psychology claiming humans to be unreliable. The paradox comes from the fact that the psychologists and sceptics are human themselves, meaning that they state themselves to be unreliable.
> More precisely, epidemiologists do this day in, day out, using biostatistical models and then applying causal inference (the counterfactual-knowledge part included)
Yes, Anders_H is a Doctor of Science in Epidemiology. He's someone worth listening to when he tells you what can and can't be done with experimental design.
Oooh, an appeal to authority. If that is the case, he is no doubt highly accomplished. However, that need not translate to blind deference.
This is a text conversation, so rhetorical questions aren't immediately apparent. Moreover, we're in a community that explicitly celebrates reason over other modes of rhetoric. So I interpreted his question about counterfactual conditions as sincere rather than disingenuous.
> Oooh, an appeal to authority. If that is the case, he is no doubt highly accomplished. However, that need not translate to blind deference.
Yes, but if you disagree you can't simply point to "biostatisticians do this day in, day out" and a bunch of Wikipedia articles; you have to actually argue the merits of why you think those techniques can be used in this case.