I have a hard time thinking about this topic completely disconnected from its moral implications, because plenty of people in and out of academia will seize any opportunity to use this kind of modeling to further their agenda. Even improvements to these models get used as propaganda, and in doing so, groups make understanding the models unpalatable to those they disagree with, which cuts off insight from people who become allergic to the topic. I almost didn't comment because of this, and I have a hunch it's at least part of why the post didn't get more upvotes: the ambient propaganda feeling around Darwinism outside academia carries the vague implication that if you're talking about the concept, you think it's good.
I feel like we need to understand biological and civilizational agency, what it means for a thing to seek an outcome outside itself, in order to make high-quality, scientifically grounded claims about the relationship between morality and Darwinism. I do think you're barking up vaguely the right tree, but we can do better than the might-makes-right perspective this could be twisted into implying: "so you're telling me that if we kill everyone we don't like, that means our values are objectively good?" If anything, the interesting place to investigate values Darwinism is the long history of values selection up until humanity went civilization-scale a mere ~12,000 years ago. And our lack of detailed evidence about the values of people that far back makes that rather a problem.
Somewhat tangentially, on that note: among a few other papers I was browsing a while ago, I really like this take on the large cost of acquiring information through death, because it seems to me to imply that learning mechanisms that do not require death are likely more effective at picking up adaptive traits.
Re: "so you're telling me that if we kill everyone we don't like, that means our values are objectively good?" Winners write history, so I think yes, that is how people view Darwinism and the selection of values, and I think our values are implicitly derived from this thinking (though no one will ever admit to this). The modern values of tolerance, I think, still come from this same thinking, just with the additional understanding that diverse societies tend to win out over homogeneous ones. So we transition from individual Darwinism to group Darwinism, but still keep Darwinism as our way of arriving at values.
I believe adding memetic Darwinism on top of this may qualitatively change the landscape.
Thanks for those references; definitely an interesting way to study these things quantitatively. I will look at them in more detail.