Every dollar and every unit of political capital spent on Cthulhu-proofing is a dollar and a unit of political capital not spent on things that yield competitive advantage right now. Every leader who commits their country’s resources to the Cthulhu project is outcompeted by a leader who instead commits those resources to economic growth, military strength, or popular welfare programs. Every researcher who works on Cthulhu defense could be working on something that produces more papers, grants, or products in their lifetime. The people who take Cthulhu seriously are, at every level of the competition, at a disadvantage relative to those who don’t, or who mouth the right words about taking it seriously while allocating resources elsewhere.
won’t a society that reasons this way get “outcompeted” by one that makes better decisions, in the sense that the former society ends up eaten by fish people (or whatever the fate)?
as long as there is an era where the threats are local, selection should have enough feedback to teach the lesson.
There aren’t enough different societies for selection to have much effect at this level. This is essentially the evolutionary theory of group selection, which I think is mostly false.
Partially disagree. There are absolutely intrinsically high-trust and intrinsically low-trust societies, and we have seen this WRT global issues like the environment. In some places, things like littering and dumping in rivers are “just what’s done”, and in others they are “just not done”, despite every nation having access to the same information about how bad it is to pollute the water supply. Group selection kind-of-works for humans because human groups can police their own very effectively over many generations. Most high-trust societies today have a long history of executing a decent share of the population for crime and dishonorable behavior every generation.
That said, AI is low-salience for most people, and I think a substantial share of the people who do care believe that descriptions of the threat are overblown. Among the remainder, you generally see programmers, engineers, politicians, and military planners rather than ordinary people, and those groups are much more inclined towards logical game-theoretic arguments than moral ones, even if the rest of the population leans the other way, simply because they either start their problem-solving process by mathing it out (programmers, engineers) or because they got where they are by being pragmatists (politicians, military planners).
this has nothing to do with group-level gene selection: the learnings can be entirely cultural. i’m not arguing that we are genetically predisposed to consider tail risks, rather that existing societies have faced some pressure to [create cultural machinery that effectively aligns their constituents to] care about tail risks.
i don’t expect that many societies would be needed for this, as horizontal meme transfer is easy, and cultures can learn post-mortem from their vanished neighbors. see for example the Sentinelese, as an existence proof.
I see. I’d frame this more as individuals learning from past societal failures. But to what extent is this happening? The warning of Easter Island seems largely unheeded. Some individuals learn. Whether it’s enough is highly questionable. I don’t see societies putting much effort into this; I can’t even think of a single class on “why civilizations collapse”. Books exist, but that’s not much of a societal-level effort.
But I see what you mean and I agree. Individuals will see examples of this happening and be able to learn from them.
I think this is true insofar as there is selection pressure: that is, such events are survivable, and they don’t require unified coordination of all agents to survive.
The Cthulhu example isn’t great, if only because the nature of the threat is pretty vague to most readers (at least to me).
A better example: a medieval society gets hit by a meteorite. Does this cause selection pressure for medieval societies to build meteorite-proof castles? Not if it just kills everyone.
Alternatively: an early-industrial society that notices an approaching comet might be able to coordinate to invent a redirect rocket, or nukes, or whatever, to save itself. But there is still no selection pressure, since the two possible results are that everyone survives or everyone dies. If anything, competitive pressure will punish anyone who spends resources on saving the world, since those resources benefit competitors who spent zero on the asteroid redirect mission just as much as the nations who spent half their GDP to survive. Unless social pressure or something similar can effectively reward the heroic resource-sacrificing nations, they will be putting themselves at a huge disadvantage, and if GDP correlates with representation over time, you would expect the selfish nations to actually be the ones selected for.
I get that you are arguing that a society this bad at reasoning in general should be outcompeted by a society that reasons better, but we should expect both societies to be outcompeted by one that is capable of reasoning well when it is competitively valuable and of ignoring such reasoning when it is not. I think that might be a good description of the current United States, for example, which is great at listening to academics when it is profitable and ignoring them when it is inconvenient for business interests.
this is “chicken” in game theory, right? whoever swerves first (fends off the meteor) loses (pays the cost), but if nobody swerves, then everybody loses big (suffers the impact).
i agree that the decisions here are more complex than “always immediately fund the antimeteor kickstarter” or “always freeride”. both societies should lose to ones that are better at skills like coalition building, etc.
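To make the “chicken” framing concrete, here is a minimal sketch of the payoff structure being described. The numbers are illustrative assumptions, not anything from the thread: funding the defense is “swerving” (paying the cost), freeriding is driving straight, and mutual freeriding is the impact, the worst outcome for everyone.

```python
# Illustrative chicken game: two nations decide whether to fund meteor defense.
# Payoff numbers are made-up assumptions chosen only to exhibit the structure.
FUND, FREERIDE = "fund", "freeride"

# payoff[(a_move, b_move)] = (payoff to A, payoff to B)
payoff = {
    (FUND, FUND):         (2, 2),  # cost shared, everyone survives
    (FUND, FREERIDE):     (1, 3),  # A pays; B freerides and outgrows A
    (FREERIDE, FUND):     (3, 1),  # mirror image
    (FREERIDE, FREERIDE): (0, 0),  # nobody funds: impact, everybody loses big
}

def best_response(opponent_move):
    """A's payoff-maximizing move, given what the other nation does."""
    return max((FUND, FREERIDE), key=lambda m: payoff[(m, opponent_move)][0])

# Chicken's signature: the best response to "fund" is "freeride", and the
# best response to "freeride" is "fund", so no single move is safe
# regardless of what the other player does.
print(best_response(FUND))      # freeride
print(best_response(FREERIDE))  # fund
```

This captures why “always fund” and “always freeride” are both losing strategies in expectation, and why the edge goes to societies that can coordinate on who pays, i.e. the coalition-building skill mentioned above.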
Ya, I agree this should be true in principle; I think that, given more time, there might be the opportunity for some sort of “Dath Ilan”-lite society to rise to the top.