Very interesting, and I think it mostly goes in the right direction, but I'm not very convinced by the arguments, chiefly because I don't think the analysis of the causes of war is sufficient here.
For example, even within rational actor models, I don't think you give enough credence to multi-level models of incentives for war, which I discussed a bit here. Leaders are often willing to engage in brinkmanship or even go to war because it's advantageous to them regardless of whether they win. A single case illustrates this: a dictator might go to war to suppress internal dissent, and in that case even losing the war can serve as a rallying cry for him to consolidate power. An AI system might even tell people that, but it won't stop him from making the decision if having a war benefits him. And even without a dictator, different constituencies will support or oppose war for reasons unrelated to whether the country is likely to win, because "good for the country overall" isn't any single actor's reason for any decision, and prediction services won't (necessarily) change that.
Thanks for the comment, and I enjoyed reading the article! I basically agree with what you said, and I admit that I only touch briefly on this important "multi-level interests problem" in the "domestic audience" section. I think a lot would depend on (1) how widely diffused those war-relevant prediction services are and (2) the distribution of societal trust in them (e.g. whether they become politicized), both of which would be country- and context-specific; I did not come up with useful ways to disentangle them further at a general level.