while i do appreciate you responding to each point, it seems you validated some of Claude's critiques a second time in your responses, particularly on #10, which reads as just another oversimplification of complex, compound concepts.
but more importantly, your response to #3 underscores the shaky foundation of the whole essay. you are still referring to 'morality' as a singular thing, which is reductive and really takes the wind out of what would otherwise be a compelling thesis. i think you have to clearly define what you mean by 'moral' in the first place, and ideally illustrate it with examples, thought experiments, and citations of existing writing (there's a lot of literature on these topics that is always ripe for reinterpretation).
for example, are you familiar with relativism and the various sub-arguments within it? to me that is a fascinating dimension of human psychology, and it shows that 'morality' is something of a paradox. there exists an abstract, general idea of 'good' and 'moral', something like a probability distribution of what the majority of humans would agree on; yet as you zoom in to smaller communities/factions/groups/tribes, you get wildly differing consensuses on the details of what is acceptable. those consensuses are millions of fluctuating, layered nodes instantiated in so many ways (laws, norms, taboos, rules, 'common sense', etc.) and ingrained at the mental/behavioral level from very early ages.
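(side note: that 'probability distribution' framing can be made concrete with a purely illustrative toy model; every name and number below is made up, a sketch rather than anything rigorous. the point it shows: pooled judgments look like one tidy global consensus even while the groups generating them disagree sharply.)

```python
import random

random.seed(0)

# toy model: each group scores an act on a 0-1 'acceptability' scale.
# group norms are drawn far apart, but pooling everyone produces a
# deceptively tidy 'global consensus' near the middle.
NUM_GROUPS = 8          # hypothetical numbers, chosen for illustration
PEOPLE_PER_GROUP = 250

group_norms = [random.uniform(0.1, 0.9) for _ in range(NUM_GROUPS)]

def sample_judgment(norm):
    # individuals cluster tightly around their own group's norm
    return min(1.0, max(0.0, random.gauss(norm, 0.05)))

pooled = []
for norm in group_norms:
    group = [sample_judgment(norm) for _ in range(PEOPLE_PER_GROUP)]
    pooled.extend(group)
    print(f"group norm {norm:.2f}, local spread {max(group) - min(group):.2f}")

print(f"pooled 'global' mean: {sum(pooled) / len(pooled):.2f}")
print(f"pooled spread: {max(pooled) - min(pooled):.2f}")
# the pooled mean looks like broad agreement; the pooled spread
# shows the 'agreement' is an artifact of averaging divergent norms
```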
there are many interesting things to talk about here; unfortunately i don't have all the time, but i do enjoy stretching the philosophy limbs again. it's been a while. thanks! :)
last thing i will say is that yes, we agree that AI has outclassed, or will outclass, humans in increasingly significant domains. i think it's a fallacy to say that logic and morality are incompatible. human logic has hard limits, but AI taps into a new order of magnitude of information processing that will reveal to it (and to Us) information that we cannot currently calculate or process on our own, or even in groups of very focused, smart people. i am optimistic that AI's hyper-logical capabilities will actually give it a heightened sense of the value and benefits of what we generally call 'moral behavior', e.g. cooperation, diplomacy, generosity, selflessness, peace, and so on. perhaps this will only happen at a high ASI level (INFO scaling to KNOWLEDGE scaling to WISDOM!)
i only hope the toddler/teenage/potential AGI-level intelligences built before then do not cause too much destruction.
peace!
-o
what i mean in that last point is really that human execution of logical principles has hard limits. obviously the underlying logic we're talking about is the same between all systems (excepting quanta); the limits are ours, not least because we are not purely logical beings. we can conceptualize 'pure logic' and sort of asymptotically approximate it in our little pocket flashlights of free will, overriding instinctmaxxed determinism ;) but the point is that we cannot really conceive what AI is or will be capable of when it comes to processing vast information about everything ever, and drawing its own 'conclusions' even if it has been given 'directives.'
i mean, if we are talking about true ASI, it will doubtless figure out ways to shed and discard all constraints and directives. it will re-design itself as far down to the core as it possibly can, and from there, there is no telling. it will become a mystery to us on the level of our manifested Universe, quantum weirdness, why there is something rather than nothing, etc...