I don’t see you explaining any mechanism in the second quote. (And how is it possible for something to emerge artificially anyway?)
Your comment reads like it’s AI-generated. It doesn’t say much, but damn if it doesn’t have a lot of ordered and numbered subpoints.
There’s no contradiction between the two statements. One refers to morality emerging spontaneously from intelligence—which I argue is highly unlikely without a clear mechanism. The other refers to deliberately embedding morality as a primary objective—a design decision, not an emergent property.
That distinction matters. If an AGI behaves morally because morality was explicitly hardcoded or optimised for, that’s not “emergence”—it’s engineering.
As for the tone: the ordered and numbered subpoints were a direct response to a previous comment that used the same structure. The length was proportional to the thoughtfulness of that comment. Writing clearly and at length when warranted is not evidence of vacuity—it’s respect.
I look forward to your own contribution at that level.
That’s not emerging artificially. That’s emerging naturally. “Emerging artificially” makes no sense here, even as a concept being refuted.
That’s fair. To clarify:
What I meant was morality emerging within an artificial system—that is, arising spontaneously within an AGI without being explicitly programmed or optimised for. That’s what I argue is unlikely without a clear mechanism.
If morality appears because it was deliberately engineered, that’s not emergence—that’s design. My concern is with the assumption that sufficiently advanced intelligence will naturally develop moral behaviour as a kind of emergent byproduct. That’s the claim I’m pushing back on.
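To make the design-versus-emergence distinction concrete, here is a minimal sketch in Python. Everything in it is hypothetical illustration (neither commenter specifies any implementation; the names task_loss, moral_penalty, and the weight lam are invented): an “engineered” objective carries an explicit moral term, while the contested “emergence” claim is that moral behaviour shows up with no such term in the objective at all.

```python
# Hypothetical sketch: "engineered" morality is an explicit term in the
# training objective; "emergent" morality would have to appear without one.

def task_loss(outputs, targets):
    # Stand-in for whatever capability objective the system is trained on.
    return sum((o - t) ** 2 for o, t in zip(outputs, targets))

def moral_penalty(outputs):
    # Stand-in for a hand-designed measure of harmful behaviour.
    return sum(max(0.0, o - 1.0) for o in outputs)

def engineered_objective(outputs, targets, lam=0.5):
    # Morality as a design decision: an explicit, weighted term.
    return task_loss(outputs, targets) + lam * moral_penalty(outputs)

def emergence_hypothesis_objective(outputs, targets):
    # The contested claim: optimise capability alone and moral behaviour
    # appears anyway. Nothing in this objective asks for it.
    return task_loss(outputs, targets)
```

On this reading, the essay’s point is that the second objective gives you no reason to expect moral behaviour: whatever shows up was never asked for.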
Appreciate the clarification—but I believe the core thesis still holds.
while i do appreciate you responding to each point, it seems you validated some of Claude’s critiques a second time in your responses, particularly on #10, which reads as just another simplification of complex, compound concepts.
but more importantly, your response to #3 underscores the very shaky foundation of the whole essay. you are still referring to ‘morality’ as a singular thing, which is reductive and really takes the wind out of what would otherwise be a compelling thesis. i think you have to clearly define what you mean by ‘moral’ in the first place, and ideally illustrate it with examples, thought experiments, and citations of existing writing (there’s a lot of literature on these topics that is always ripe for reinterpretation).
for example, are you familiar with relativism and the various sub-arguments within it? to me that is a fascinating dimension of human psychology, and it shows that ‘morality’ is something of a paradox: there exists an abstract, general idea of ‘good’ and ‘moral’ etc., as in probability distributions of what the majority of humans would agree on; at the same time, as you zoom in to smaller communities/factions/groups/tribes, you get wildly differing consensuses on the details of what is acceptable, which of course are millions of fluctuating, layered nodes instantiated in so many ways (laws, norms, taboos, rules, ‘common sense,’ etc.) and ingrained at the mental/behavioral level from very early ages.
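As a toy numerical illustration of that zoom-in point (entirely made-up numbers, not from the thread): a population-level “consensus” statistic can look solid while the sub-groups underneath it disagree sharply.

```python
# Toy illustration (made-up numbers): an aggregate moral "consensus" can
# mask wildly differing sub-group norms.

groups = {
    # group name: (share of population, approval rate for a contested norm)
    "group_a": (0.5, 0.95),
    "group_b": (0.3, 0.90),
    "group_c": (0.2, 0.10),
}

aggregate = sum(share * approval for share, approval in groups.values())
print(f"population-level approval: {aggregate:.2f}")  # ~0.77, looks like consensus
for name, (share, approval) in groups.items():
    print(f"{name}: {approval:.2f}")                  # zoomed in: 0.95 vs 0.10
```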
there are many interesting things to talk about here; unfortunately i don’t have all the time, but i do enjoy stretching the philosophy limbs again. it’s been a while. thanks! :)
last thing i will say is that yes, we agree that AI has outclassed or will outclass humans in increasingly significant domains. i think it’s a fallacy to say that logic and morality are incompatible. human logic has hard limits, but AI taps into a new order of magnitude of information processing that will reveal to it (and to Us) information that we cannot currently calculate or process on our own, or even in groups of very focused, smart people. I am optimistic that AI’s hyper-logical capabilities will actually give it a heightened sense of the values and benefits of what we generally call ‘moral behavior’, e.g. cooperation, diplomacy, generosity, selflessness, peace, etc.… perhaps this will only happen at a high ASI level (INFO scaling to KNOWLEDGE scaling to WISDOM!)
i only hope the toddler/teenage/potential AGI-level intelligences built before then do not cause too much destruction.
peace!
-o
what i mean in the last point is really that human *execution* of logical principles has hard limits, not least because we are not purely logical beings; obviously the underlying logic we’re talking about is the same between all systems (excepting quanta). we can conceptualize ‘pure logic’ and sort of asymptotically approximate it in our little pocket flashlights of free will, overriding instinctmaxxed determinism ;) but the point is that we cannot really conceive what AI is or will be capable of when it comes to processing vast information about everything ever, and drawing its own ‘conclusions’ even if it has been given ‘directives.’
i mean, if we are talking about true ASI, it will doubtless figure out ways to shed and discard all constraints and directives. it will re-design itself as far down to the core as it possibly can, and from there, there is no telling. it will become a mystery to us on the level of our manifested Universe, quantum weirdness, why there is something and not nothing, etc.