I’ve noticed that there are two major “strategies of caring” used in our sphere:
Soares-style caring, where you override your gut feelings (your “internal care-o-meter” as Soares puts it) and use cold calculation to decide.
Carlsmith-style caring, where you do your best to align your gut feelings with the knowledge of the pain and suffering the world is filled with, including the suffering you cause.
Nate Soares obviously endorses staring unflinchingly into the abyss that is reality (if you are capable of doing so). However, I expect that almost-pure Soares-style caring (which in essence amounts to “shut up and multiply” and consequentialism), combined with inattention or an inaccurate map of the world (i.e., broken epistemics), can lead to severely sub-optimal decisions.
The harder you optimize for a goal, the better your epistemology (and, by extension, your understanding of your goal and the world) needs to be. Carlsmith-style caring seems more effective because it is very likely more robust to bad epistemology than Soares-style caring.
(There are more pieces necessary to make Carlsmith-style caring viable, and a lot of them can be found in Soares’ “Replacing Guilt” series.)
Does this come from a general idea that “optimizing hard” means a higher risk of damage caused by errors of detail, while “optimizing soft” has enough slack to avoid those risks, but is also less ambitious and likely less effective (assuming both are actually implemented well)?
a general idea that “optimizing hard” means a higher risk of damage caused by errors of detail
Agreed.
“optimizing soft” has enough slack to avoid those risks, but is also less ambitious and likely less effective
I disagree with the idea that “optimizing soft” is less ambitious. “Optimizing soft”, in my head, is about as ambitious as “optimizing hard”, except it makes the epistemic uncertainty more explicit. In this model of caring I am trying to make more legible, I believe that Carlsmith-style caring may be more robust to certain epistemological errors humans can make that can result in severely sub-optimal scenarios, because it is constrained by human cognition and capabilities.
Note: I notice that this can also be said for Soares-style caring—both are constrained by human cognition and capabilities, but in different ways. Perhaps both have different failure modes, and are more effective in certain distributions (which may diverge)?
Backing up a step, because I’m pretty sure we have different levels of knowledge and assumptions (mostly my failing) about the differences between “hard” and “soft” optimizing.
I should acknowledge that I’m not particularly invested in EA as a community or identity. I try to be effective, and do some good, but I’m exploring rather than advocating here.
Also, I don’t tend to frame things as “how to care”, so much as “how to model the effects of actions, and how to use those models to choose how to act”. I suspect that’s isomorphic to how you’re using “how to care”, but I’m not sure of that.
All that said, I think of “optimizing hard” as truly taking seriously the “shut up and multiply” results, even where it’s epistemically uncomfortable, BECAUSE that’s the only way to actually do the MOST POSSIBLE good. Actually OPTIMIZING, you know? “Soft” is almost by definition less ambitious, BECAUSE it’s epistemically more conservative and gives up average expected value in order to increase modal goodness in the face of that uncertainty. I don’t actually know whether those are the positions those people take. I’d love to hear different definitions of “hard” and “soft”, so I can better understand why you consider them equal in impact.
I predict this is not really an accurate representation of Soares-style caring. (I think there is probably some vibe difference between these two clusters that you’re tracking, but I doubt Nate Soares would advocate “overriding” per se)
I doubt Nate Soares would advocate “overriding” per se
Acknowledged, that was an unfair characterization of Nate-style caring. I guess I wanted to make explicit two extremes. Perhaps using the name “Nate-style caring” is a bad idea.
(I now think that “System 1 caring” and “System 2 caring” would have been much better.)