Is it justifiable for non-experts to have strong opinions about Gaza?

Adam Zerner

Ok, so here’s the prompt that we’ll be discussing, which I proposed:

“I am of the opinion that the situation in Gaza is complicated and that a large majority of people are uninformed enough such that anything more than weak opinions is not justified.”

To get started, I’d like to propose the analogy of chess. If we wave our hands a little bit, we can (perhaps) say that people generally agree on what outcomes we are seeking. Something along the lines of “maximizing utility” or “human flourishing”. I think the disagreement is more so over how we can achieve these outcomes.

I think the situation in Gaza is analogous to chess in the sense that if you move a piece, it isn’t too clear what the consequences of that move will be or whether the move will help you achieve your end goal. On the other hand, in a game like Tic-Tac-Toe, things are very simple, and you can in fact do quite a good job at foreseeing the consequences of your move.

At the risk of being uncharitable: with those who support an aggressive Israeli response, it almost feels like they’re arguing something analogous to: “How dare they capture one of our pawns like that! How unwarranted and monstrous of them! We can’t let them do that. We must recapture the piece that took our pawn, and go on the offensive to take enough of their pieces such that they won’t be able to capture any of our pieces in the future.”

And similarly, with those who are against the aggressive Israeli response, it almost feels like they’re arguing: “Sure, Gaza shouldn’t have captured that pawn, but look at how lopsided Israel’s response is. They took our rooks already and are now going after our king!”

The analogy breaks down in various ways, but I like how it highlights the complexity and uncertainty. Chess is a complicated game, and it is difficult to predict the consequences of your moves. Especially for people who haven’t studied the game. Similarly, there is a lot of complexity in the “game board” surrounding Jerusalem, and it is difficult to predict the consequences that various moves will have. Especially for people who haven’t studied the game.

Yair Halberstadt

So I think what you’re saying is: people are judging this war based on whether each side’s actions feel justified, but actually you’d judge it based on what the consequences will be, and these are too complex to figure out?

Yair Halberstadt

If so then a few questions:

  1. Too complex for anyone to figure out? Or just for most people? Who can figure it out?

  2. Couldn’t that be used to avoid judging anyone? Like couldn’t you say the same about Hamas’s initial assault as well? You don’t know what the long-term consequences will be, so you can’t say if it was good or bad.

  3. Seemingly a lot of individual actions can be judged independently of the total consequences. Like if Israel deliberately targets civilians for revenge, that would presumably be unjustified even if the war itself achieved its goals?

Adam Zerner

So I think what you’re saying is: people are judging this war based on whether each side’s actions feel justified, but actually you’d judge it based on what the consequences will be, and these are too complex to figure out?

No. The main thing I’m saying is that, for non-experts, the consequences are difficult enough to figure out that strong opinions about what the consequences will be are not justified. I’m not making a claim about what approach people take to judge the war.

Too complex for anyone to figure out? Or just for most people? Who can figure it out?

I am not sure what to think about any of those questions. It feels analogous to my opinions about some disputed question in the field of zoology. I don’t know much about the field of zoology, and so I don’t really know how much expertise it takes to justifiably have a strong opinion.

Similarly, I don’t know enough about fields like geopolitics and international relations to have a sense of how much expertise it’d take to justifiably have a strong opinion about the war in Gaza, or how rare it is to have the requisite amount of expertise.

Couldn’t that be used to avoid judging anyone? Like couldn’t you say the same about Hamas’s initial assault as well? You don’t know what the long-term consequences will be, so you can’t say if it was good or bad.

I don’t think so. I think there are situations that are simple enough that you can be justified in judging someone. For example, if I went outside and punched a random stranger in the face, that’s a simple situation where it is easy and justifiable to judge me.

As for Hamas’s initial assault, I actually would question whether it would be justifiable to say with high confidence that the assault was “bad”. If it happened in some sort of simple, sand-boxed environment where the attack happens and it doesn’t lead to any other consequences, then I think that it would be clearly bad.

But in reality, it was an aggressive move taken on a complicated chess board, so to speak. It seems possible that they had reason to believe that the attack, while certainly causing some amount of bad, would end up doing more good in the long run.

Seemingly a lot of individual actions can be judged independently of the total consequences. Like if Israel deliberately targets civilians for revenge, that would presumably be unjustified even if the war itself achieved its goals?

I think individual actions should be judged in large part based on the intent of the individual. If Israel targeted civilians for the Joker-like reason of wanting to “see the world burn”, then that, I think, is clearly bad.

If they did so purely for revenge, well, I think that’s probably bad, but I’m also not super confident that revenge is a bad thing. Like, from behind a veil of ignorance, would I want to live in a world where people don’t seek revenge? I’m not sure. It seems complicated.

If Israel targeted civilians because they calculated that the pros would outweigh the cons, well, I could see that being valid as well. I don’t know enough about the “game board” to have high-confidence feelings here.


All of this said, I am trying to be cognizant of The Sin of Underconfidence here. Maybe I am committing that sin. I’m not sure.

For most of these questions I’m probably like 80-90% sure that the thing in question is “bad”. I’m just not at the sort of 99.99% confidence of “for all intents and purposes I know that this is bad”.

Yair Halberstadt

I agree that strong opinions about the consequences are unjustified for most people. But I don’t think that most people are making strong statements about what the consequences will be? Or at least if they are, they’re doing so as statements of ideology rather than even pretending to rationally predict the future (“Every single Hamas member will be killed”/“Israel will be destroyed”). Of the people who are actually engaging at a simulacra level where facts matter at all, I think most have been very explicit that they don’t know what the future will bring.

One area where I will agree people have been overconfident in predicting what will happen is when they try comparing this conflict to other conflicts. E.g. “crushing defeat of Germany and Japan utterly destroyed those ideologies, Israel must do the same”/“American war on terror just created more terrorists, Israel is just going to make things worse”. The differences between all those examples are too many to count, and I don’t know of anyone with a good model of when brutal repression of a resistance movement works vs. when it doesn’t (there certainly are cases where it does, e.g. the Second Chechen War).

Yair Halberstadt

With regards to your second comment: in general where we have to make moral decisions in the face of uncertainty, we rely on deontology.

Will killing an abusive person make the world a better place? It’s impossible to know, and a utilitarian might end up tying themselves in knots trying to justify it. But we’ve got pretty good rules of thumb (like don’t kill) which get the correct answer 95% of the time, don’t require too much cognitive effort, and are harder to game. If you stick to them, your maximum “goodness” is perhaps reduced, but your chances of making a catastrophic mistake are too.
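
To put toy numbers on that (all invented for illustration, not real estimates), here’s a minimal sketch of why a decent rule of thumb can beat case-by-case reasoning that is usually right but occasionally catastrophically wrong:

```python
# Toy comparison of a simple moral rule vs case-by-case calculation.
# All payoffs and probabilities here are made up for illustration.

# The "don't kill" rule: right 95% of the time; when it's wrong you
# merely forgo a moderate gain.
rule_ev = 0.95 * 10 + 0.05 * (-5)            # = 9.25

# Case-by-case reasoning: right a bit less often (it's easier to game
# and to fool yourself), and when it's wrong the error is catastrophic.
case_by_case_ev = 0.80 * 12 + 0.20 * (-100)  # = -10.4

print(f"rule of thumb: {rule_ev}, case-by-case: {case_by_case_ev}")
```

The rule caps your upside (10 vs 12 when it’s right), but it avoids the rare catastrophic mistake, so it wins in expectation.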

The problem with war is that it’s a case where a lot of the deontological rules break down (don’t kill), and those we have left (fight for your country, don’t surrender) are likely to increase total misery rather than decrease it, at least insofar as they apply to both sides.

The various international treaties on war crimes try to restore some level of deontological ethics back to the field, but suffer from the adversarial nature of combat. If X is a war crime, then it’s in the interests of the enemy to ensure that you can’t win without doing X. This leaves the definitions often deliberately vague and toothless, and those that aren’t are often not ratified by countries that actually expect to fight wars.

So we’re back to pure utilitarian calculations for the morality of war, which, as you rightly point out, are very difficult. Now in practice some people need to make decisions anyway, but I think insofar as you don’t need to, there’s no point investing too much effort in it.

Adam Zerner

But I don’t think that most people are making strong statements about what the consequences will be? Or at least if they are, they’re doing so as statements of ideology rather than even pretending to rationally predict the future (“Every single Hamas member will be killed”/“Israel will be destroyed”). Of the people who are actually engaging at a simulacra level where facts matter at all, I think most have been very explicit that they don’t know what the future will bring.

Huh. I don’t get that impression. I feel maybe 80% confident that most people are making strong statements about what the consequences would be.

One area where I will agree people have been overconfident in predicting what will happen is when they try comparing this conflict to other conflicts.

Ah yeah, those are good examples. Agreed.

With regards to your second comment: in general where we have to make moral decisions in the face of uncertainty, we rely on deontology.

Will killing an abusive person make the world a better place? It’s impossible to know, and a utilitarian might end up tying themselves in knots trying to justify it. But we’ve got pretty good rules of thumb (like don’t kill) which get the correct answer 95% of the time, don’t require too much cognitive effort, and are harder to game. If you stick to them, your maximum “goodness” is perhaps reduced, but your chances of making a catastrophic mistake are too.

Funny coincidence: I’m actually in the middle of another LessWrong dialog with someone else that is largely about this.

That perspective doesn’t really seem true to me. I think consequentialism as an approach is more than happy to advocate for leaning heavily on heuristics like “killing tends to lead to bad outcomes for society”. And I think that a lot of (rationalist and normie) people have in their mind’s eye (maybe subconsciously?) the goal of optimizing for consequences, as opposed to the goal of following rules (deontology) or practicing virtues (virtue ethics).

Adam Zerner

Oh, also. Re: me not being totally sure that the people behind Hamas’s original attack are “bad guys”. Something happened to me this morning that made me think of it.

I was walking my dog with my girlfriend outside, and I saw a homeless person digging through a trash can. She had wire cutters in her hands. I got mad and said something like, “Oh god, look at that, she’s gonna go around stealing people’s bikes.” (I’m very anti-car and pro-bike, so it’s a bit of a sensitive subject for me.) My girlfriend said something like, “I’m surprised she’s upright. I saw her this morning on my way to work smoking something and she was stumbling around a lot.”

Then I realized what a desperate and horrific state she must be in. I’d feel confident in saying that if a normal, comfortable person went around stealing bikes for no good reason other than greed, that is a “bad thing” and reason to vilify them. But this homeless lady? I’m not so sure. I could imagine it being an unreasonable thing to expect her to not go around stealing bikes. And I could imagine it not being her fault that she is in this desperate situation to begin with.

Similarly, the initial Hamas attack at first glance seems like something that I’d want to vilify. And I do feel maybe 90% confident that it is worth vilifying. But I’m not totally sure. Like the homeless lady, I see it as possible that, given the desperate situation they’re in, the attacks were justifiable. Or at least not worth being vilified for. And I think that it’s probably unreasonable for a non-expert to feel highly confident that the attacks were worth vilifying.

Yair Halberstadt

Huh. I don’t get that impression. I feel maybe 80% confident that most people are making strong statements about what the consequences would be.

Interesting, not sure how to resolve that. Certainly of the people I’ve spoken to here in Israel, everybody agrees the future is uncertain—even predicting what the government/IDF will do is impossible, never mind long-term consequences.

On Twitter, once we remove all the bots/completely-detached-from-reality stuff, I’ve seen far more moral condemnation of one side or the other, or claims about what’s happening, or about what one or the other side’s strategy is, than I’ve seen discussion of what the future will actually bring. That’s the one area where I think, on average, people aren’t actually espousing opinions that they have insufficient knowledge to hold (bar plenty of exceptions of course).

Yair Halberstadt

I think consequentialism as an approach is more than happy to advocate for leaning heavily on heuristics like “killing tends to lead to bad outcomes for society”.

That’s exactly my point. In theory consequentialism requires infinite compute. In practice, consequentialists rely on deontological style heuristics (just these rules aren’t arbitrary, they’re based on experience).

Yair Halberstadt

Like the homeless lady, I see it as possible that, given the desperate situation they’re in, the attacks were justifiable. Or at least not worth being vilified for. And I think that it’s probably unreasonable for a non-expert to feel highly confident that the attacks were worth vilifying.

So I can definitely see a scenario where that would be the case. I think though in practice that doesn’t at all describe the situation in Gaza:

  1. This attack was well planned by the upper echelons of Hamas, and was not an impulsive decision made by Gazans or lower ranking Hamas members. In fact it was kept secret from them till the last minute.

  2. Whilst most Gazans live in poverty (and many in abject poverty), the upper echelons of Hamas are extremely wealthy and live in incredible luxury in Gaza. Whilst clearly with an agenda, this account has for years documented the luxury that exists in Gaza for members of the elite, and I haven’t seen anyone claim it’s fake. Those who made this decision were not economically desperate.

  3. Hamas is very well entrenched in Gaza. It has no need to take actions against Israel for its political survival.

  4. As I argue here, prior to October 7th it would actually have been very easy for Hamas to achieve peace and prosperity for Gazans (but not for Palestinians in general), by simply credibly renouncing violence in return for alleviating the blockade.

  5. It’s clear that Hamas’s actions were driven by a wider pan-Palestinian ideology, rather than a reaction to their own desperate needs or those of their constituents.

Now that doesn’t on its own establish that what they did was wrong, without getting into the weeds of what their ideology was and whether the attack had a decent chance of succeeding at achieving that aim. But I think it does highlight why your comparison given here is not really applicable without quite a bit of massaging.

If anything, though, this supports your wider point. Making a strong claim about the legitimacy of Hamas’s actions requires a lot of knowledge about the situation that the vast majority of people making such claims don’t have.

Adam Zerner

Interesting, not sure how to resolve that.

Yeah, me neither. I’d say let’s come back to it in the future if at all.

Now that doesn’t on its own establish that what they did was wrong, without getting into the weeds of what their ideology was and whether the attack had a decent chance of succeeding at achieving that aim. But I think it does highlight why your comparison given here is not really applicable without quite a bit of massaging.

If anything, though, this supports your wider point. Making a strong claim about the legitimacy of Hamas’s actions requires a lot of knowledge about the situation that the vast majority of people making such claims don’t have.

Ah, good points. I agree.


So, it seems that we are agreeing a lot and that various threads of this conversation are closing up. Moving forward, I think it’d be cool to get into the weeds a bit about why exactly it is difficult to predict the consequences of various policies related to the Israel-Hamas war. What do you think?

Like, why is it not as simple as “Hamas struck first in a terrible way, so it is clearly morally justified for Israel to attack back.” And similarly, why it is not as simple as “Israel is causing so much collateral damage and hurting so many innocent people. Clearly that is a morally abhorrent thing to do.”

(I live in Portland, Oregon and you live in Israel. I know a lot of people who would argue the latter and I presume you know a lot of people who would argue the former. Well, maybe not. You had said before that your impression is that they are coming at it from a more deontological angle in the sense of “this is bad because it violates a deontological rule” as opposed to “this is bad because it will produce bad consequences”.)

Yair Halberstadt

I’m happy to continue with that route. But it seems to me that both examples you gave:

Like, why is it not as simple as “Hamas struck first in a terrible way, so it is clearly morally justified for Israel to attack back.” And similarly, why it is not as simple as “Israel is causing so much collateral damage and hurting so many innocent people. Clearly that is a morally abhorrent thing to do.”

are deontological statements rather than consequentialist ones, so maybe there’s a miscommunication here?

Yair Halberstadt

Also as an aside—here’s an example of an article I read this morning making (IMO) way too strong predictions about what the outcome of this war will be: https://warontherocks.com/2023/12/reversing-americas-ruinous-support-for-israels-assault-on-gaza/

Adam Zerner

Hm. Maybe we should talk about this deontological vs consequentialist thing a little more.

It is possible to claim that something is immoral for:

  1. Deontological reasons

  2. Consequentialist reasons

  3. Some other reason

When I said “Like, why is it not as simple as...”, I phrased the statements in an ambiguous way. It’s not clear whether the claims of “morally justified” and “morally abhorrent” are due to (1), (2), or (3). By habit, I did have (2) in my mind though. And I understand that you think most people are making such statements with (1) in mind.

Actually—sorry for the left turn, I’m communicating my actual thought process here—I think (1), (2) and (3) are misleading. I think most of the time people’s actual line of thought looks like something of a mixture of deontology, virtue ethics, and consequentialism. And I think that there’s a good amount of consequentialist stuff in there. And, with respect to the war in particular, I think that people’s moral claims contain a good amount of consequentialist stuff. And so, I think there’s a good amount of unjustifiably confident predictions about the future happening.

It’s tangential to the main thread in our dialogue of whether, independent of whether people actually do make confident claims about the future in the context of the war, such claims would be justifiable. However, I think that this side thread of, descriptively, what people do, adds some nice color to the main thread, and so I think it’d be cool to spend some time pursuing it. What do you think?

If you would like to pursue it, I could elaborate on my thoughts here if you’d like. But if instead you’d like to take us down a particular path, we can do that as well.

Yair Halberstadt

Happy to take this wherever you want to—also given the very asynchronous nature of this conversation I think we can safely have two threads running at once as well.

I think I agree with you that when people make a statement about whether something is moral they are often mixing both deontology and consequentialism (+ a bit of virtue ethics and even plain old vibes). Perhaps our main disagreement is over which one dominates for the average person. I think that heavily depends on the topic and framing, but at least for your average statement on this issue it maybe leans more deontological, though I could easily be convinced otherwise.

More relevantly, even when they do mean it from a consequentialist perspective, I don’t know if that necessarily means they’ve actually put 2+2 together and realised that means they’re making concrete predictions about the future. It’s possible that when pressed they’ll admit they don’t know what the future will bring.

E.g. here’s a theoretical conversation I might have with an Israeli:

Me: Aren’t you worried about the civilian toll of this war?

Them: but what choice do we have? Hamas hides behind human shields, so we can’t eradicate Hamas without killing lots of civilians.

Me: so do you think we’ll manage to eradicate Hamas?

Them: I don’t know, I hope so, but they’re good at hiding and might just recruit more people once it’s all over.

In this (hypothetical) example they’ve failed to connect the dots that justifying killing civilians to eradicate Hamas relies on that being a successful strategy for doing so. Now there’s a lot more nuts and bolts that would have to go into that discussion if we were having it for real (what other aims are there, what about decision theoretic issues, what are the differences in probabilities of outcomes, etc.), but this is just to highlight a potential way the path from consequentialist ethics → confident predictions can break down. I hope that made sense.

Adam Zerner

Happy to take this wherever you want to—also given the very asynchronous nature of this conversation I think we can safely have two threads running at once as well.

I agree that that’s true from the perspective of me and you, but from the perspective of the people who read this when we publish it, I think it makes for a bad experience. Any thoughts on that? Are you ok with trying to keep things more linear?

Yair Halberstadt

Sure, we can keep things linear.

Adam Zerner

I remember hearing about descriptive ethics being a field of study, so I did some digging into it, hoping that there might be some good data on how people reason about moral situations that can help inform our conversation. Unfortunately, and to my surprise, I didn’t find much.

  • There is a Wikipedia page for descriptive ethics, but it’s pretty short and doesn’t really provide helpful links to other things.

  • I was pretty surprised to see that the Stanford Encyclopedia of Philosophy didn’t show any articles on descriptive ethics. I also tried searching for “comparative ethics”, since Wikipedia says that’s another way people refer to it, but nothing there either.

  • My new favorite resource, 1000-Word Philosophy, didn’t have any posts about it.

  • Web searches didn’t give me anything either.

However, Claude provided some helpful information:

Descriptive ethics is the study of people’s moral beliefs and how they actually behave when making moral decisions. Some key points about descriptive ethics:

  • It aims to describe how people make moral judgments, not prescribe how they should. It is more empirical than normative.

  • Researchers have found that in practice, people tend to use a mix of moral philosophies rather than adhering strictly to just one.

  • For example, people may consider consequences (utilitarianism), duties and principles (deontology), character traits (virtue ethics), and social norms all together when making moral judgments.

  • The mix of factors people use can vary for different issues or situations. For example, people may rely more on principles for clear-cut moral issues but more on consequences for complex dilemmas.

  • There can be inconsistencies between people’s stated moral philosophies and how they actually behave in real-world moral contexts. Lots of research examines this “judgment-action gap.”

  • Descriptive ethics challenges the idea that everyday morality neatly fits into textbook ethical theories. People blend principles in a complex, situational way.

So in summary, yes based on a wide body of empirical research, descriptive ethics suggests that in practice, people tend to use a pluralistic blend of moral philosophies rather than adhering to just one. The mix depends on the context and people don’t always behave consistently with their own stated philosophies.

Sometimes these LLMs make stuff up, so I take its response with a grain of salt, but I think it’s at least moderately reliable.

Anyway, yeah, I think we agree that in practice people use a mix of approaches as opposed to just one.


More relevantly, even when they do mean it from a consequentialist perspective, I don’t know if that necessarily means they’ve actually put 2+2 together and realised that means they’re making concrete predictions about the future. It’s possible that when pressed they’ll admit they don’t know what the future will bring.

I think you make a really good point here. I’m realizing that this is more subtle than I initially was imagining. And I’m suspecting that we actually pretty much agree with each other. Before, I thought that we had a disagreement about how much people make moral judgements rooted in consequentialist thinking, but now I suspect that we don’t have this disagreement.

One place I’m coming from is that—especially in practice as opposed to in theory—people who apply deontology and virtue ethics spend a whole lot of time thinking about consequences.

Situations are frequently complex, and it’s not clear what deontological rule to follow. What do you do when there are various rules at play and some of them are recommending different conclusions or courses of action? Well, maybe you appeal to a sort of “higher level deontological rule” to resolve things. But 1) I think that often leads to a situation where you now run into a new conflict where various “higher level rules” bump into each other. And 2) in practice, I suspect that people apply consequentialist logic. Like, “let’s use this rule instead of that one because the former looks like it produces better consequences”.

Similar for virtue ethics.

But, as you say, perhaps this sort of thinking isn’t really happening at a conscious level. I’d go even further and say I’m moderately confident that it’s not, and that instead, when taking moral stances about the war in Gaza, people are leaning heavily on gut reactions and intuitive feel, in a way that doesn’t involve much conscious thought about consequences.

I feel confused though. I’m not too sure about this. It seems plausible to me that there’s a lot more conscious thought about consequences. Like, “Israel invading Gaza and killing all of those civilians is clearly going to do wayyyy more harm than good. Therefore, it is monstrous of them and they should be vilified.” Or “Hamas’s terrorist attack is clearly going to do wayyyy more harm than good. Therefore it is monstrous of them and they should be vilified.”

Yair Halberstadt

I think everything you just said is plausible, and I think there’s probably a huge range in reactions, both by different people, and by the same person at different times—I often catch myself thinking about aspects of the war in very different frames:

Sometimes I’ll be very FDT: “if you react differently to people hiding behind human shields, that’ll encourage them to use human shields in the first place.”

Sometimes I’ll be more deontological/virtue ethics: “it’s critical Israel follows the rules of war and maintains purity of arms.”

Sometimes more CDT: “there’s no way Israel’s campaign will save more lives than it kills, so how can it be justified if I value all lives equally?”

And often it’ll be just vibes: I’ll see a stupid comment by one side or another, or a heart-wrenching video, and that’ll swing my opinion one way or another.

These are just how I might approach it when I’m thinking about it, and then I often bring in the other perspectives. But when I don’t have time to think about it deeply I’ll go either based on these initial frames or cached thoughts from earlier ruminations.

All of which is to say it’s probably going to be difficult to generalize.

That said, I think we both agree that there definitely is a strong aspect of consequentialist thinking in people’s thoughts, and that this consequentialist thinking is often not based on sound reasoning, even if we’re not certain of how significant this contribution is.

Adam Zerner

That all makes sense.

I think this painted some nice color in preparation for us diving more deeply into the weeds of why it isn’t justified to have a high degree of confidence in whether the consequences of a given policy will be good. Wanna move on to that now?

Yair Halberstadt

Let’s go!

Adam Zerner

Awesome. So, to be concrete, let’s look at Israel’s war on Gaza. Can a non-expert be confident that this war will do more harm than good? More good than harm? (For the sake of this discussion, let’s assume that we’re not valuing Israeli lives more than Palestinian lives or than the lives of people elsewhere.)

Actually, I feel like it’d be better if you took the lead on this. You seem to know more about it than I do. What are your thoughts?

Yair Halberstadt

I think part of the question is: compared to what hypothetical?

Like we could imagine a world where Israel didn’t go to war after the attacks on October 7th, but that’s just not a realistic world. If we’re imagining that world, why not just imagine that everyone agrees to live in peace happily ever after?

What do I mean by “that’s not a realistic world”? Basically, there’s no individual person or small group of people who could have changed that outcome. After 1300 people were killed, 240 kidnapped, and 200,000 displaced, every Israeli expected war, and if the government hadn’t gone to war there would likely have been a revolution or internal crisis of some sort.

Now the exact format of the war had some broad ability to change within that constraint, but I’m not an expert on either how wide those constraints are, or what the outcome of different strategies would be.

So I think a more relevant question is to think in concrete terms:

Either “what should I be lobbying the US government to do”, or “what should I be lobbying the Israeli government to do”, or something like that. Or even more specifically: should the USA give/sell munitions to Israel?

Now these are actually much more involved questions than the moral correctness of the war—there’s a lot of other things to consider, like how a refusal to provide an ally with munitions during war will affect US relations with other countries who might no longer trust the US to supply them with weapons, and instead look eastward to Russia or China to fill that niche.

I’m interested: which question most interests you?

Yair Halberstadt

And more broadly: saying something is immoral because there exist better options is an impossibly high standard, which every policy will fail. Consequentialism only allows you to compare two options, and say which is better, not to give an absolute yes/no to whether a specific option is moral.

Adam Zerner

And more broadly: saying something is immoral because there exist better options is an impossibly high standard, which every policy will fail.

I’d like to tease things apart a bit here.

  1. It’s one question to ask whether the individual people who pushed for Israel to go to war after the October 7th attacks did something immoral.

  2. It’s another to ask whether non-experts are justified in feeling confident that the consequences of Israel going to war will be good or bad.

  3. It’s another to ask what citizens like us should push our governments to do.

I was hoping to discuss (2), but it sounds like you are disputing whether (2) is a useful or interesting question to discuss. Is that true? If so, how about we proceed by discussing whether or not it is a useful or interesting question to discuss?

Yair Halberstadt

I’m happy to discuss: “what will the consequences be of Israel going to war”, and then we can loop back to the other questions. I was just objecting to the framework of “will it cost more lives than it saves”, because that inherently assumes a comparison to something.

Adam Zerner

Gotcha. Would you like to take the lead on this question of why we think it isn’t justifiable for non-experts to have strong opinions about these consequences, or would you prefer me to?

Yair Halberstadt

So the way I see it, the broad possible outcomes are:

On Hamas

  1. Israel destroys Hamas as a powerful organisation in Gaza (it remains as a smaller organisation or organisations plus an ideology, and remains in the West Bank, but isn’t really relevant beyond the occasional terrorist attack). 20%

  2. Hamas remains in Gaza, but is significantly diminished, and forced to change aspects of its ideology/strategy. It’s no longer able to be the sole governor of Gaza. 60%

  3. Hamas remains in power as governor of Gaza much the same as before. It quickly rearms. 20%

On Israeli occupation of Gaza

  1. Israel occupies all of Gaza, and establishes a secure government more to their liking, then pulls out. 15%

  2. Israel occupies all of Gaza, then pulls out, either without establishing a government at all, or with a weak one which soon collapses, or is forced to accept one it doesn’t like. 20%

  3. Israel indefinitely occupies all of Gaza. This counts if it is able to raid any part of Gaza when it wants to, even if it only regularly has boots on the ground in part/​none of Gaza. 30%

  4. Israel indefinitely occupies part of Gaza. 20%

  5. Israel pulls out of Gaza completely. 15%

On hostages

  1. Most remaining hostages are released as part of a deal. 60%

  2. Most remaining hostages are freed during the war. 10%

  3. Most remaining hostages are killed during the war. 20%

  4. Most remaining hostages remain so indefinitely. 10%

On ethnic cleansing

  1. Citizens of Gaza move en masse to other countries. 5%

  2. Citizens of Gaza are not allowed to return to northern Gaza, but do not move en masse to other countries. 5%

  3. Citizens of Gaza can return to most of Gaza, minus small security buffers. 50%

  4. Citizens of Gaza can return to all of Gaza. 40%

My predicted final death toll for the war is 20,000-40,000 civilians and 10,000-20,000 combatants.

As you can see, a pretty wide range of outcomes here, without even getting into the finer details. I don’t think many people outside Israel could give more reliable estimates, but I imagine a lot of people here who are in relevant positions in the army/government could give much more specific probabilities for some of these questions.

Adam Zerner

I appreciate the concreteness here, of enumerating possible outcomes and assigning probabilities to them.

It all sounds like it’s probably reasonable enough, although I’m not really sure.

I’m realizing that a lot of my thinking here is that it seems plausible that there are a lot of second-order effects. And third-order, fourth-order, and nth-order effects.

For example, suppose that Hamas as a powerful organization is destroyed. I’m sure there are a lot of direct effects of that such as less conflict in that region. But it seems plausible to me that there is a second-order effect of more dislike of Israel. Which could have third-order effects involving the United States. Which could lead to nth-order effects involving NATO.

The first-order effects seem difficult enough to predict, but the nth-order effects seem both a) way more difficult to predict, and b) plausibly very large in magnitude. That combination of (a) and (b) makes me think that it’s quite difficult to be confident about consequences here.

I’m also realizing that I don’t have a great grasp on the first, second, and nth order effects and am having trouble getting more concrete here. If you have further concrete things to say here, I welcome them.

Yair Halberstadt

Oh, I absolutely agree that there are likely to be lots of further-order effects, and that these effects are difficult to predict. They include:

Small-scale terrorism, both in Gaza and the West Bank, is likely to increase (but by how much?).

Large-scale terrorism is likely to decrease the more successful Israel is at subduing Hamas, both because they’ve subdued Hamas and because it acts as deterrence.

The war will possibly strain Israeli relations with Arab states. OTOH it looks like the Saudis are gearing up to normalise relations with Israel either way (based on their anti-Hamas propaganda).

It will definitely affect election outcomes in Israel, and possibly in the USA.

It will increase worldwide sympathy for the Palestinian cause.

It will likely decrease Gazan support for Hamas.

It will likely rehabilitate the PA’s image as a peaceful organisation.

It’s likely to decrease UNRWA support.

It’s likely to decrease UNIFIL support.

It may lead to a war with Lebanon.

It may ease or tighten the blockade on Gaza.

It may increase or decrease the chance of a Palestinian state.

There are also tons of domestic second-order effects, but they’re less interesting to you (e.g. likely to increase the Hareidi participation rate in the army).

I’m not even going to try to estimate the chances and magnitudes of these effects.

Adam Zerner

Yeah.

I thought I might have more detailed things to say here, but I’m finding myself coming up empty. Sorry. To argue something like “Israel’s going to war against Hamas in Gaza is clearly going to have good/bad overall consequences” seems very tough to defend.

Is there anything you want to hit on or circle back to? If not, I’m happy to wrap up.

Yair Halberstadt

I think that this is completely ignoring the FDT aspect of things.

I.e. I agree it’s impossible to say with certainty that the war will have good/bad consequences, but that’s an impossible tack for a country to take as a foreign policy.

Otherwise you can arrange things such that any response will have dubious consequences and do what you like with impunity—as indeed the Houthis are trying to do in the Red Sea.

A foreign policy where it’s clear you will try your goddamn hardest to destroy the abilities of anyone who seeks to attack you, and damn the consequences, makes it less likely for anyone to attack you in the first place.

Adam Zerner

I think that this is completely ignoring the FDT aspect of things.

I’ve never had a great grasp of the different decision theories, but this doesn’t seem true to me. For example:

A foreign policy where it’s clear you will try your goddamn hardest to destroy the abilities of anyone who seeks to attack you, and damn the consequences, makes it less likely for anyone to attack you in the first place.

I see this as part of the consequentialist calculus. Plausibly a pretty big part, in fact.

As you say, by taking the “damn the consequences” approach (which, to be clear, is more like “damn the fuzzier, nth-order, harder-to-identify consequences”), it probably has the consequence of making it less likely for anyone to attack you...

Well, you say “in the first place”. I’m thinking “in the future”. “In the future” definitely seems like it is not something that the consequentialist calculus would ignore. However, “in the first place” does seem like something that it would ignore. Did you mean to say “in the first place”, or “in the future”?

Yair Halberstadt

I meant in the first place. That is the chief difference between FDT and CDT. CDT only looks forward, but FDT looks at how my actions would have influenced the decisions that have already been made as well.

So the difference here is that it is plausible that Israel could prevent similar attacks at lower cost by beefing up border security. But if Hamas knew Israel would react that way, there’s no incentive for them not to attack. So Israel implicitly pre-commits to a harsh response in such a situation. Now that Hamas has attacked, if Israel wants to respect its implicit pre-commitment, it has to respond.

People do tend to follow such implicit pre-commitments, even when the game isn’t repeated, e.g. in the ultimatum game.

Adam Zerner

Hm. I have spent time trying to understand decision theories, but they’ve never really made sense to me, so please excuse any naivete here. I’m excited about discussing it though. This is a nice, concrete example, which I think makes it both easier and more interesting to discuss the decision theory stuff.

Here is how I am thinking about it.

Scenario #1:

  • At t=1, Hamas attacks Israel.

  • At t=2, a) Israel pursues a harsh response and b) commits to similarly harsh responses in the future.

In this scenario, the t=2 decision to commit to further harsh responses (b) does not affect the t=1 action of Hamas attacking Israel. However, it does (probably) affect t>2 actions of Hamas and other groups that may consider attacking Israel.

Scenario #2:

  • In a counterfactual world, at t=0, before Hamas attacked Israel at t=1, Israel commits to harsh responses to such attacks.

We don’t know what would happen at t=1 here, but it seems plausible that the Hamas attacks wouldn’t have happened due to the anticipation of a harsh response.

But I don’t see how scenario #2 is relevant.

Yair Halberstadt

So we agree that if Israel can credibly precommit at t=0 to a harsh response, that’s good for them. But in practice it’s hard to precommit to every possible situation. It’s true that Israel never made the statement “if you kill a thousand of our civilians in a terrorist attack we will invade Gaza”. And Hamas knows this. So under standard CDT, they can safely get away with attacking Israel, because it hasn’t precommitted, and by the time they’ve carried out the attack there’s no point in Israel responding.

Under FDT you say: things would clearly be much better if I didn’t have to explicitly precommit every time; I need to be the sort of entity that can be assumed to respond harshly to any attacks.

Now you might point out that that’s all very well, but this clearly didn’t work—after all Hamas attacked anyway, so what’s the point of responding? But if you then decide based on that not to respond, Hamas could have predicted that, and that might have been the whole reason they attacked.

So the only way to do this is to actually be the sort of entity who genuinely, for real, will actually respond to attacks, even when it’s not worth it. And that means that when someone does actually attack you in the real world, you do respond even though the benefits aren’t worth it.

Note this is a bit of a botched explanation, missing out a lot of finer details. For example, under CDT you can’t even precommit, since there’s no incentive to keep to the precommitment later. Instead you need to e.g. sign a contract with another country that you will give them 10 billion dollars if someone attacks you and you don’t respond. Also there’s a lot of technical details of exactly how you decide what to do (and lots of unresolved questions).

This is of course completely ignoring the repeated game nature of things, where even under CDT it may be worth responding harshly.
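
A toy model might make the CDT/FDT difference concrete. Everything below is invented for illustration: the payoffs, and the assumption that the attacker can perfectly predict the defender’s policy (which is the key FDT premise):

```python
# Toy deterrence game with made-up payoffs. The attacker predicts the
# defender's policy before deciding whether to attack.

GAIN_FROM_ATTACK = 10   # attacker's gain if the attack goes unpunished
PUNISHMENT = 15         # loss inflicted on the attacker if punished
ATTACK_DAMAGE = 8       # damage the defender suffers from an attack
RETALIATION_COST = 4    # extra cost the defender pays to retaliate

def cdt_policy(attacked: bool) -> bool:
    # Once the attack has happened, retaliating only adds cost, so a
    # purely forward-looking defender never retaliates.
    return False

def fdt_policy(attacked: bool) -> bool:
    # Be the sort of entity that always retaliates when attacked, even
    # though retaliation is costly in the moment.
    return attacked

def attacker_attacks(defender_policy) -> bool:
    # The attacker simulates the defender's policy and attacks only if
    # attacking is profitable given the predicted response.
    predicted_retaliation = defender_policy(True)
    payoff = GAIN_FROM_ATTACK - (PUNISHMENT if predicted_retaliation else 0)
    return payoff > 0

for name, policy in [("CDT", cdt_policy), ("FDT", fdt_policy)]:
    attacked = attacker_attacks(policy)
    loss = ATTACK_DAMAGE + (RETALIATION_COST if policy(True) else 0) if attacked else 0
    print(f"{name} defender: attacked={attacked}, total loss={loss}")

# Prints: the CDT defender is attacked (loss 8); the FDT defender isn't (loss 0).
```

The CDT defender reasons “correctly” at t=2 and still ends up worse off, because the attacker’s t=1 decision depended on what the defender’s policy was all along.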

Adam Zerner

Now you might point out that that’s all very well, but this clearly didn’t work—after all Hamas attacked anyway, so what’s the point of responding? But if you then decide based on that not to respond, Hamas could have predicted that, and that might have been the whole reason they attacked.

Hamas’ attack happened at t=1. The question at hand is what to do at t=2. I think we agree that Israel can plausibly reduce future attacks at t>2 by responding aggressively to Hamas at t=2.

However, it sounds like you might be saying that at t=2, Israel’s response could affect Hamas’ decision to attack at t=1. Is that true? If so, my intuition says that would be impossible. Would you mind expanding on why you think it would happen?

Yair Halberstadt

Let’s take a simpler scenario. I promise that if you steal from me, I’ll chase you down and get the money back, no matter the cost to me. You do indeed steal from me. What should I do now? Assume no one else will ever know my decision, so we don’t have to worry about future deterrence.

Adam Zerner

I think it’s helpful to outline these situations with timelines. So you’re saying that:

  • t=0 you commit to chasing down.

  • t=1 I steal from you.

  • t=2 you are faced with a decision of whether or not to chase me down, and no one will ever know about it.

To try to respond to the spirit of what I think you’re asking: I think at t=2 it wouldn’t make sense to chase me down (I’ll assume the immediate costs of the chase-down are larger than the benefits, that there isn’t a need to deter me from future theft, and that there aren’t any other more complex things to consider).

Yair Halberstadt

Great, but knowing this, you have no incentive not to steal from me. So is there anything better I can do at t=0?

Adam Zerner

Just making the commitment more believable.

Yair Halberstadt

How?

Yair Halberstadt

One option is for me to e.g. have a few pints of beer, such that at t=2 I’ll no longer be thinking logically, and will chase you down, damn the consequences.

So it sounds like it’s better for me to make myself less logical at t=2?

FDT essentially says that there’s no reason for me to have to be drunk to do this: I can recognise that being the sort of person who will chase after thieves is a good thing to be, and use self-control to chase after the thieves even when there’s nothing in it for me.

Applied to Israel: it’s better for Israel to be the sort of entity that responds harshly to attacks, even when that response is not worth the costs. Now it’s true that in the situation Israel finds itself in, if it was thinking ‘logically’ it might decide not to attack, but that’s irrelevant, because Israel at t=2 has no agency. At t=0 it self-modified to ensure it would respond regardless. And that was the logical thing for Israel to do at t=0.

Adam Zerner

How?

One option is for me to e.g. have a few pints of beer, such that at t=2 I’ll no longer be thinking logically, and will chase you down, damn the consequences.

So it sounds like it’s better for me to make myself less logical at t=2?

Yes, if at t=0 you were somehow able to persuade me that at t=2 you would be in some sort of impulsive and less logical state of mind, that would make the t=0 commitment more believable, and thus more successful at deterring me from stealing from you.

But I’m not seeing the relevance of this. The original question was what you should do at t=2.

I can recognise that being the sort of person who will chase after thieves is a good thing to be, and use self-control to chase after the thieves even when there’s nothing in it for me.

In a forward-looking sense, I agree that being that sort of person probably deters thieves, which plausibly might tip the scale in favor of being that sort of person. Like if you become that person at t = x, it will be helpful at t > x.

Perhaps where we disagree is that I don’t think becoming that type of person at t = x will affect things that happen at t < x. For example, suppose:

  • t = 0 you are not vengeful.

  • t = 1 I steal from you.

  • t = 2 you become vengeful.

My position is that your t = 2 action to become vengeful will affect things only at t > 2. Perhaps it will deter me and others from stealing from you at t > 2. However, my position is also that this t = 2 action will not affect things that happen at t < 2. Namely, it will not prevent me from having stolen from you at t = 1.

I’m not clear on which, if any, of that you disagree with, or on what, if any, important things you think I am missing.

Yair Halberstadt

The simple, but less fundamental answer is that Israel does indeed have a long history of overreacting to what it sees as aggressions, and so can be seen to have so precommitted in the past.

Yair Halberstadt

The more fundamental answer is that if you assume that decision-making is deterministic, then if I do decide to respond to Hamas’s attack at t=2, that could have been predicted in the past at t=0, and so at t=0 I was already the sort of entity that would do so. Whereas vice versa, if I don’t, that could also have (in theory at least) been predicted at t=0 too. So in a very fundamental sense, yes, my decisions at t=2 do go back in time and make me vengeful at t=0.

Now you might ask what’s the point of doing this, if it won’t help me in this timeline where being vengeful at t=0 wasn’t sufficient. And the answer is that being the kind of entity that behaves this way does mean you lose out more when things go wrong, but it also means things go wrong less often, which usually ends up worth it. And by not choosing to seek vengeance at t=2, you’re essentially becoming the sort of entity that doesn’t seek vengeance (since you’d make the same decision in every similar situation, and people can predict that), and things will go wrong more often.
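
To put rough (and entirely invented) numbers on “you lose out more when things go wrong, but things go wrong less often”:

```python
# Back-of-envelope expected-loss comparison with invented numbers.
# The vengeful policy loses more per incident (it also pays the cost
# of taking revenge), but, being predictable, it gets attacked less.

p_attack = {"vengeful": 0.05, "not vengeful": 0.50}     # assumed attack rates
loss_if_attacked = {"vengeful": 12, "not vengeful": 8}  # damage (+4 revenge cost)

for policy in ("vengeful", "not vengeful"):
    expected_loss = p_attack[policy] * loss_if_attacked[policy]
    print(f"{policy}: expected loss = {expected_loss}")

# vengeful: 0.05 * 12 = 0.6; not vengeful: 0.50 * 8 = 4.0
```

So the vengeful policy wins in expectation, even though in the particular timeline where an attack happens anyway it does strictly worse.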

Yair Halberstadt

Also I get the feeling I’m not very good at explaining this… Sorry. There’s definitely people who have talked about it more eloquently than me, would you appreciate me trying to find some articles and sending them to you?

Adam Zerner

So in a very fundamental sense, yes, my decisions at t=2 do go back in time and make me vengeful at t=0.

Hm. I disagree with this.

Also I get the feeling I’m not very good at explaining this… Sorry. There’s definitely people who have talked about it more eloquently than me, would you appreciate me trying to find some articles and sending them to you?

I don’t think it’s worth attempting to resolve the disagreement though. I have spent a good deal of time reading various things about decision theory, including most of the popular stuff on LessWrong, and it’s still not really clicking with me, so I’m feeling pessimistic that this will go anywhere. What do you think?

I do want to note that I recognize that lots of smart people feel differently, and that fact makes me feel not too confident about my position. The fact that “doing things now affects the past” seems very implausible to me could very well just be a failure on my part rather than something that stems from the truth.

I also want to note that this is probably a really strong point in our main thread about whether non-experts are justified in feeling confident about consequences with respect to the Israel-Gaza war! If the consequences depend on this (presumably, to most people) mind-bending thing about FDT and “doing things now affects the past”, then, well, that doesn’t seem like the sort of thing that the average non-expert who has Opinions about this war would be able to reason about.

Yair Halberstadt

That all makes sense, yes. We can agree to disagree here.

Adam Zerner

Cool. I’m ready to wrap up if you are. Anything else you want to discuss?

Yair Halberstadt

One final point. You said:

For the sake of this discussion, let’s assume that we’re not valuing Israeli lives more than Palestinian lives or than the lives of people elsewhere.

Now whilst I agree that that’s a good outside perspective to take, I don’t think it works as foreign policy, or you become too vulnerable to blackmail. If you value enemy civilian lives as much as your own soldiers’, it’s very easy for the enemy to engineer a situation where the only way to save enemy civilians is to put your own soldiers’ lives at risk, by e.g. refusing to evacuate civilians. It creates generally bad incentives.

Adam Zerner

That makes sense, I agree.