I’m under the impression that the conquest of the American continent was not a project of goodness; it was a project of conquest. Then the revolution happened (the way it happened), Washington resigning happened, and a whole lot of other things, and in the end it could be argued (controversially) that, given an oracle into 2026 and a counterfactual world, a person guided by “goodness” would not oppose the colonial project.
Yes, this is a large part of the reason for this case study in the post! “Most people involved do not seem in it for good reasons” is also one of those things that one might be tempted to use to give oneself license to just disengage from something, but it turns out, nope, people involved trying to do something good is not a necessary prerequisite for something turning out good!
(Indeed, this part seems very obvious to me. Any company that has done great things is of course centrally staffed by people who are primarily there because it’s a job that makes them money. Any evaluation of a social movement or a company or really any kind of social structure that tries to forecast its eventual harm on the basis of the people within it not being on board with the mission seems pretty unlikely to get things right, to me.)
I don’t think we (currently; I am not making the case for me-living-in-the-1490s) live in a world of scarce opportunities to apply yourself to something. Yes, things that turn out to be good are sometimes done without good intentions on the part of the majority of people doing them. But also, most (?) bad things are done by people with bad intentions (or bad beliefs). I would be surprised if, in a world where you can choose to apply yourself to movements that exercise effort to have good intentions & good beliefs, you wouldn’t be better off assigning some weight to this heuristic.
In other words, people involved trying to do something good is not a prerequisite, but I would guess it is a non-negligible predictor. (You don’t explicitly state that you disagree that it is a significant predictor, so maybe we agree on this actually, but your last paragraph seems to imply that taking mission alignment into account when forecasting harm does not improve the forecast’s quality, which is a bit counterintuitive to me.)
I do agree that if your alternative is “having the goals of a corpse,” you probably should try to make American colonisation less bad or its results more good instead of just doing nothing.