>In the future we’ll have much better tools for exploring reasoning, including various assumptions. [...] Why lock in these assumptions now instead of later, after we can answer such questions (and maybe much better ones people think up in the future) and let the answers inform our choices?
It might just be how my brain is calibrated: Even from a young age my interest in philosophy (and ethics in particular) was off the charts. I’m not sure I could have summoned that same energy and focus if I had gone into it with an attitude of “I should remain uncertain about most of it.” (In theory, someone could hyperfocus on unearthing interesting philosophical considerations without actually committing to any of them—they could write entire books in a mode of “this is exploring the consequences of this one assumption, which I’m genuinely uncertain about,” and then write other books on the other assumptions. But with the practical stakes feeling lower and without the mental payoff of coming to conclusions, it’s a lot less motivating, certainly for me/my brain, but maybe even for humans in general?)
I don’t want to be uncertain about whether I should reason in a way that seems nonsensical to me. The choice between “irreducible normativity makes sense as a concept” and “irreducible normativity does not make sense as a concept” seems really clear to me. You might say that I’m going against many very smart people in EA/rationality, but I don’t think that’s true. On LW, belief in irreducible normativity is uncommon and arguably even somewhat frowned upon. Among EAs, it’s admittedly more common, at least in the sense that there’s a common attitude of “we should place some probability on irreducible normativity because smart people take it seriously.” However, if you then ask whether there are any EAs who take irreducible normativity seriously for object-level reasons and not out of deference, you’ll hardly find anyone! (Isn’t that a weird state of affairs?!) (Lastly on this, EAs often mention Derek Parfit as someone worth deferring to, but they did so before Volume III of On What Matters was even out, and before that volume Parfit had written comparatively little on metaethics, so that was again indirect deference rather than object-level-informed deference.)
Opportunity costs; waiting/deferring is not cost-free: Different takes on “how to do reasoning” have different downstream effects on things like the values I would adopt, and those values come with opportunity costs. (I say more on this further down in this comment.)
>What if once we become superintelligent (and don’t mess up our philosophical abilities), philosophy in general will become much clearer, similar to how an IQ 100 person is way worse at philosophy compared to you? Would you advise the IQ 100 person to “lock in some basic assumptions about how to do reasoning” based on his current intuitions?
If it is as you describe, and if the area of philosophy in question didn’t already seem clear to me, then that would indeed convince me in favor of waiting/deferring.
However, my point of disagreement is that there are a bunch of areas of philosophy (by no means all of them—I have a really poor grasp of decision theory and anthropics and infinite ethics!) that already seem clear to me, and it’s hard to conceive of how they could get muddled again (or become clear in a different form).
Also, I can turn the question around on you:
What if the IQ 200 version of yourself sees more considerations, but overall their picture is still disorienting? What if their subjective sense of “oh no, so many considerations, what if I pick the wrong way of going about it?” never goes away? What about the opportunity costs, then?
You might say, “So what, it was worth waiting in case it gave me more clarity; nothing of much value was lost in the meantime, since, in the meantime, we can just focus on robustly beneficial interventions under uncertainty.”
I have sympathy for that, but I think it’s easier said than done. Sure, we should all move towards a world where EA focuses on outlining a joint package of interventions that captures low-hanging fruit from all plausible value angles, including suffering-focused ethics (here’s my take). We’d discourage too much emphasis on personal values and instead encourage a framing of “all these things are important according to plausible value systems, so let’s allocate talent according to individual comparative advantages.”
If you think I’m against that sort of approach, you’re wrong. I’ve written this introduction to multiverse-wide cooperation and I have myself worked on things like arguing for better compute governance and temporarily pausing AI, even though I think those things are kind of unimportant from an s-risk perspective. (It’s a bit more confounded now because I got married and feel more invested in my own life for that reason, but still...)
I still talk a lot about my own values because:
(1) I’m reacting to what I perceive to be other people unfairly dismissing my values (based on, e.g., metaethical assumptions I don’t share);
(2) I’m reacting to other people spreading metaethical takes that indirectly undermine my values (e.g., saying that we should update toward the values of smart EAs, who happen to have founder effects around certain values, or making the “option value argument” in a very misleading way);
(3) I think that unless some people are viscerally invested in doing what is best according to some specific values, it’s quite likely that the “portfolio-based,” cooperative approach of “let’s together make sure that all the time-sensitive and important EA stuff gets done” will predictably miss specific interventions, particularly ones that don’t turn out to be highly regarded in the community or are otherwise difficult or taxing to think about. (I’ve written here about ways in which people interested in multiverse-wide cooperation might fail to do it well for reasons related to this point.)
Overall, I don’t think I (and others) would have come up with nearly the same number of important considerations if I had been much more uncertain about my values and ways of thinking, and I think EA would be epistemically worse off in that counterfactual.
The way I see it, this proves that there were opportunity costs compared to the counterfactual where I had just stayed uncertain.
Maybe that’s a human limitation; maybe there are beings who can be arbitrarily intellectually productive and focused on what matters most given their exact uncertainty distribution over plausible values, beings who get up in the morning and do ambitious and targeted things (and make sacrifices) even if they don’t know exactly what their motivation ultimately points to. In my case at least, I would have been less motivated to make use of any potential I had if I hadn’t known that the reason I got up in the morning was to reduce suffering.
One thing I’ve been wondering:
When you discuss these things with me, are you taking care to imagine yourself as someone who has strong object-level intuitions and attachments about what to value? I feel like, if you don’t do that, then you’ll continue to find my position puzzling in the same way John Wentworth found other people’s attitudes about romantic relationships puzzling, before he figured out he lacked a functioning oxytocin receptor. Maybe we just have different psychologies? I’m not trying to get you to adopt a specific theory of population ethics if you don’t already feel like doing that. But you’re trying to get me to go back to a state of uncertainty, even though, to me, that feels wrong. Are you putting yourself in my shoes enough when you give me the advice that I should go back to the state of uncertainty?
One of the most important points I make in the last post in my sequence is that forming convictions may not feel like a careful choice, but rather more like a discovery about who we are.
I’ll now add a bunch of quotes from various points in my sequence to illustrate what I mean by “more like a discovery about who we are”:
For moral reflection to move from an abstract hobby to something that guides us, we have to move beyond contemplating how strangers should behave in thought experiments. At some point, we also have to envision ourselves adopting an identity of “wanting to do good.”
[...]
[...] “forming convictions” is not an entirely voluntary process – sometimes, we can’t help but feel confident about something after learning the details of a particular debate.
[...]
Arguably, we are closer (in the sense of our intuitions being more accustomed and therefore, arguably, more reliable) to many of the fundamental issues in moral philosophy than to matters like “carefully setting up a sequence of virtual reality thought experiments to aid an open-minded process of moral reflection.” Therefore, it seems reasonable/defensible to think of oneself as better positioned to form convictions about object-level morality (in places where we deem it safe enough).
In my sequence’s last post, I have a whole list of “Pitfalls of reflection procedures” about things that can go badly wrong, and a list on how “Reflection strategies require judgment calls.” By “judgment call” I don’t mean that making the “wrong” decision would necessarily be catastrophic, but rather just that the outcome of our reflection might very well be heavily influenced by unavoidable early decisions that seem kind of arbitrary. If that is the case, and if we realize that we feel more confident about some first-order moral intuition than about which way to lean on the judgment calls involved in setting up moral reflection procedures (“how to get to IQ 200” in your example), then it actually starts to seem risky and imprudent to defer to the reflection.
A few more quotes:
As Carlsmith describes it, one has to – at some point – “actively create oneself.”
On why there’s not always a wager for naturalist moral realism (the wager applies only to people like you who start the process without object-level moral convictions).
Whether a person’s moral convictions describe the “true moral reality” [...] or “one well-specified morality out of several defensible options” [...] comes down to other people’s thinking. As far as that single person is concerned, the “stuff” moral convictions are made from remains the same. That “stuff,” the common currency, consists of features in the moral option space that the person considers to be the most appealing systematization of “altruism/doing good,” so much so that they deem them worthy of orienting their lives around. If everyone else has that attitude about the same exact features, then [naturalist] moral realism is true. Otherwise, moral anti-realism is true. The common currency – the stuff moral convictions are made from – matters in both cases.
[...]
Anticipating objections (dialogue):
Critic: Why would moral anti-realists bother to form well-specified moral views? If they know that their motivation to act morally points in an arbitrary direction, shouldn’t they remain indifferent about the more contested aspects of morality? It seems that it’s part of the meaning of “morality” that this sort of arbitrariness shouldn’t happen.
[...]
Critic: I understand being indifferent in the light of indefinability. If the true morality is under-defined, so be it. That part seems clear. What I don’t understand is favoring one of the options. Can you explain to me the thinking of someone who self-identifies as a moral anti-realist yet has moral convictions in domains where they think that other philosophically sophisticated reasoners won’t come to share them?
Me: I suspect that your beliefs about morality are too primed by moral realist ways of thinking. If you internalized moral anti-realism more, your intuitions about how morality needs to function could change.
Consider the concept of “athletic fitness.” Suppose many people grew up with a deep-seated need to study it to become ideally athletically fit. At some point in their studies, they discover that there are multiple options to cash out athletic fitness, e.g., the difference between marathon running vs. 100m-sprints. They may feel drawn to one of those options, or they may be indifferent.
Likewise, imagine that you became interested in moral philosophy after reading some moral arguments, such as Singer’s drowning child argument in Famine, Affluence, and Morality. You developed the motivation to act morally as it became clear to you that, e.g., spending money on poverty reduction ranks “morally better” (in a sense that you care about) than spending money on a luxury watch. You continue to study morality. You become interested in contested subdomains of morality, like theories of well-being or population ethics. You experience some inner pressure to form opinions in those areas because when you think about various options and their implications, your mind goes, “Wow, these considerations matter.” As you learn more about metaethics and the option space for how to reason about morality, you begin to think that moral anti-realism is most likely true. In other words, you come to believe that there are likely different systematizations of “altruism/doing good impartially” that individual philosophically sophisticated reasoners will deem defensible. At this point, there are two options for how you might feel: either you’ll be undecided between theories, or you’ll find that a specific moral view deeply appeals to you.
In the story I just described, your motivation to act morally comes from things that are very “emotionally and epistemically close” to you, such as the features of Peter Singer’s drowning child argument. Your moral motivation doesn’t come from conceptual analysis about “morality” as an irreducibly normative concept. (Some people do think that way, but this isn’t the story here!) It also doesn’t come from wanting other philosophical reasoners to necessarily share your motivation. Because we’re discussing a naturalist picture of morality, morality tangibly connects to your motivations. You want to act morally not “because it’s moral,” but because it relates to concrete things like helping people, etc. Once you find yourself with a moral conviction about something tangible, you don’t care whether others would form it as well.
I mean, you would care if you thought others not sharing your particular conviction was evidence that you’re making a mistake. If moral realism were true, it would be evidence of that. However, if anti-realism is indeed correct, then it wouldn’t have to weaken your conviction.
Critic: Why do some people form convictions and not others?
Me: It no longer feels like a choice when you see the option space clearly. You either find yourself having strong opinions on what to value (or how to morally reason), or you don’t.
--
>I don’t think you should trust your object-level intuitions, because we don’t have a good enough metaethical or metaphilosophical theory that says that’s a good idea. If you think you do have one, aren’t you worried that it doesn’t convince a majority of philosophers, or the majority of some other set of people you trust and respect? Or worried that human brains are so limited and we’ve explored a tiny fraction of all possible arguments and ideas?
So far, I genuinely have not gotten much object-level pushback on the most load-bearing points of my sequence, so I’m not that worried. I do respect your reasoning a great deal, but I find it hard to know what to do with your advice, since you’re not proposing a concrete alternative and aren’t putting your finger on a concrete part of my framework that you think is clearly wrong—you just say I should be less certain about everything, but I wouldn’t know how to do that, and it feels like I don’t want to do it.
FWIW, I would consider it relevant if people I intellectually respect were to disagree strongly and concretely with my thoughts on how to go about moral reasoning. (I agree it’s more relevant if they push back against my reasoning about values rather than just disagreeing with my specific values.)
>So far, I genuinely have not gotten much object-level pushback on the most load-bearing points of my sequence, so I’m not that worried. I do respect your reasoning a great deal, but I find it hard to know what to do with your advice, since you’re not proposing a concrete alternative and aren’t putting your finger on a concrete part of my framework that you think is clearly wrong—you just say I should be less certain about everything, but I wouldn’t know how to do that, and it feels like I don’t want to do it.
I’m not pushing back concretely because it doesn’t seem valuable to spend my time arguing against particular philosophical positions, each of which is only held by a minority of people, with low chance of actually changing the mind of any person I engage this way (see all of the interminable philosophical debates in the past). Do you really think it would make sense for me to do this, given all of the other things I could be doing, such as trying to develop/spread general arguments that almost everyone should be less certain?
I’m not well-positioned to assess your prioritization; for all I know, you’re prioritizing well! I didn’t mean to suggest otherwise.
And I guess you’re making the general point that I shouldn’t put too much stock in “my sequence hasn’t gotten much in the way of concrete pushback,” because it could well be that there are people who would have concrete pushback but don’t think it’s worth commenting, since it’s not clear whether many people other than myself would be interested. That’s fair!
(But then, probably more people than just me would be interested in a post or sequence on why moral realism is true, for reasons other than deference, so it would be good for those object-level arguments to be put online somewhere!)
>And I guess you’re making the general point that I shouldn’t put too much stock in “my sequence hasn’t gotten much in the way of concrete pushback,” because it could well be that there are people who would have concrete pushback but don’t think it’s worth commenting, since it’s not clear whether many people other than myself would be interested. That’s fair!
Yeah, I should have framed my reply in these terms instead of my personal prioritization. Thanks for doing the interpretive work here.
>(But then, probably more people than just me would be interested in a post or sequence on why moral realism is true, for reasons other than deference, so it would be good for those object-level arguments to be put online somewhere!)
There must be a lot of academic papers posted online by philosophers who defend moral realism? For example, Knowing What Matters by Richard Y. Chappell (who is also in EA). There are also a couple of blog posts by people in EA:
https://www.goodthoughts.blog/p/moral-truth-without-substance
https://forum.effectivealtruism.org/posts/n5bePqoC46pGZJzqL/debate-morality-is-objective
But I haven’t read these and don’t know if they engage with your specific arguments against moral realism. If they don’t, and you can’t find any sources that do, maybe write a post highlighting that, e.g., “Here are some strong arguments against moral realism that haven’t been addressed anywhere online.” Or it would be even stronger if you could claim that they haven’t been addressed anywhere, period, including in the academic literature.
Just to add on to this point: I have also now skimmed the sequence, based on your recommended starting points, but it hasn’t really connected with me yet.
My admittedly very ad hoc analysis would be that some parts already seem well integrated in my mind, and the overall scope doesn’t seem to connect with anything fundamental that I think about.
I wonder what the big takeaways are supposed to be.
..
I think LW in general suffers a bit from this: Not laying out clearly and explicitly in advance where the reader is supposed to end up.
Case in point, this post: low effort, conclusions left somewhat open because I thought it was quite straightforward where I stand. Half the people reading it misunderstand it, including my position. Compare that to my super-high-effort posts with meticulous edits, which nobody even reads. LW style does not favor clarity. Never has, as far as I know.
>Are you putting yourself in my shoes enough when you give me the advice that I should go back to the state of uncertainty?
Do you want to take a shot at doing this yourself? E.g., imagine that you’re trying to convince someone else who has strong object-level intuitions about something that you think should have objective answers (like metaethics), but whose intuitions differ from your own. I don’t mean convincing them that they’re wrong via object-level arguments (which isn’t likely to work, since they have very different intuitions), but convincing them that they should be more uncertain, based on the kind of arguments I give. How would you proceed?
Or would you not ever want to do this, because even though there is only one true answer to metaethics for everyone, everyone should be pretty certain in their own answers despite their answers being different?
Or do you think metaethics actually doesn’t have objective answers, that the actual truth about the nature of morality/values is different for each person?
(The last two possibilities seem implausible to me, but I bring them up in case I’m missing something and you do endorse one of them.)