Thank you for the response. This is one of maybe two or three things I’ve read from you, so the exculpatory context, even though it was trivially available, and even though it would have been equally reasonable to infer its presence from the absence of information specifically addressing my concerns, was not part of the context in which I made my post.
It would take much longer to respond to your response point by point than to mostly go back and amend and clarify my own post, so that is what I will do. Please don’t interpret this as a motte and bailey: I will be updating as I respond, which will imply your criticisms in this post were correct. But due to a mixture of limited mental energy and rhetorical incompetence, which tends to make conversations of increasing complexity spiral away from any usefulness when I am involved in them, my priority is to offer a simple response.
I think humans in particular evolved moral faculties from their environment. These are not perfect, but I think they are tied closely enough to foundational incentives, either survival and reproduction directly or the instincts that survival and reproduction most firmly selected for, that the possibilities bifurcate fairly cleanly between continued moral improvement and extinction, with continued moral improvement being more likely. I think similar pressures have shaped every other species, to different degrees and with slightly different results, and that there is something like an instrumental convergence onto moralism that increases as intelligence and social complexity increase. That said, I don’t think every behavior is now, or will in the future be, subsumed under moral drives, or that this evolved faculty will by itself always rule out conflict between moralistic intelligences, or anything like that.
I was hedging, you are right. But the hedging wasn’t meant to imply confused commitment; it was meant to imply a lack of precommitment: either we are in your universe, where the above is not true, or in mine, where it is, and your preferred decision-making process is insufficient for either.
I don’t think that was my model of autistic people, but it probably was the implication of my words, so for whatever reason I said something that was both entirely wrong and did not reflect my beliefs. Intelligent autistic people regularly find intensely prosocial ways of behaving that minimize contact with direct social feedback. From an outside, and maybe even an inside, perspective this rhymes phenomenologically with not having a social drive, while still being much more likely to reflect one. I don’t have the appropriate rationalist vocabulary to pseudo-formalize this in English. Please accept this vague gesture as being in good faith, along with my deepest apologies for somehow mechanically saying something that was both entirely wrong and not reflective of anything I believe.
But yes, instant cloning seems to destroy selection pressure’s possible effect on morality. The felt experience of moral obligation across generations in humans seems to correspond to a faculty for the sublime, and also to notions of acausal trade, which then spiral out into different, often abstractly incompatible feelings and thoughts: amor fati and free will are both tightly associated with this sublime feeling, and so are tribalism and universalism. The core feeling embeds itself in different strategies. I don’t know that saying this speaks to anything in particular; it was just a thought I started having when I got to this paragraph.
I will stop now; this is getting less focused. Sorry, and thanks.