There aren’t enough different societies for selection to have much effect at this level. This is the evolutionary theory of group selection, which I think is mostly false.
Partially disagree. There are absolutely intrinsically high-trust and intrinsically low-trust societies, and we have seen this with respect to global issues like the environment. In some places, things like littering and dumping in rivers are “just what’s done”, and in others they are “just not done”, despite every nation having access to the same information about how bad it is to pollute the water supply. Group selection kind-of-works for humans because human groups can police their own very effectively over many generations. Most high-trust societies today have a long history of executing a decent share of the population for crime and dishonorable behavior every generation.
That said, AI is low-salience for most people, and I think a substantial share of the people who do care believe that descriptions of the threat are overblown. Among the remainder, you generally see programmers, engineers, politicians, and military planners rather than ordinary people. Those groups are much more inclined toward logical game-theoretic arguments than moral ones, even if the rest of the population leans the other way, simply because they either start their problem-solving process by mathing it out (programmers, engineers) or because they got where they are by being pragmatists (politicians, military planners).
this has nothing to do with group-level gene selection: the learnings can be entirely cultural. i’m not arguing that we are genetically predisposed to consider tail risks, rather that existing societies have faced some pressure to [create cultural machinery that effectively aligns their constituents to] care about tail risks.
i don’t expect that many societies would be needed for this, as horizontal meme transfer is easy, and cultures can learn post-mortem from their vanished neighbors. see for example the Sentinelese, as an existence proof.
I see. I’d frame this more as individuals learning from past societal failures. But to what extent is this happening? The warning of Easter Island seems largely unheeded. Some individuals learn. Whether it’s enough is highly questionable. I don’t see societies putting much effort into this; I can’t even think of a single class on “why civilizations collapse”. Books exist, but that’s not much of a societal-level effort.
But I see what you mean and I agree. Individuals will see examples of this happening and be able to learn from them.