[Question] What determines the balance between intelligence signaling and virtue signaling?

Lately I’ve come to think of human civilization as largely built on the backs of intelligence and virtue signaling. In other words, civilization depends very much on the positive side effects of (not necessarily conscious) intelligence and virtue signaling, as channeled by various institutions. As evolutionary psychologist Geoffrey Miller says, “it’s all signaling all the way down.”

A question I’m trying to figure out now is: what determines the relative proportions of intelligence signaling vs. virtue signaling? (Miller argued that intelligence signaling can be considered a kind of virtue signaling, but that seems debatable to me, and in any case, for ease of discussion I’ll use “virtue signaling” to mean “kinds of virtue signaling other than intelligence signaling”.) It seems that if you get too much of one type of signaling relative to the other, things can go horribly wrong (the link is to Gwern’s awesome review/summary of a book about the Cultural Revolution). We’re seeing this more and more in Western societies, in places like journalism, academia, government, education, and even business. But what’s causing this?

One theory is that Twitter, with its character limit, and social media and shorter attention spans in general have made it much easier to do virtue signaling relative to intelligence signaling. But this seems too simplistic, and there has to be more to it, even if it is part of the explanation.

Another idea is that intelligence is valued more when a society feels threatened by an outside force, against which it needs competent people to protect itself. The US policy changes after Sputnik are a good example of this. This may also explain why intelligence signaling continues to dominate, or at least is not dominated by virtue signaling, in the rationalist and EA communities (i.e., we’re really worried about the threat from Unfriendly AI).

Does anyone have other ideas, or has anyone seen more systematic research into this question?

Once we understand the above, here are some follow-up questions: Is the trend towards more virtue signaling at the expense of intelligence signaling likely to reverse itself? How bad can things get, realistically, if it doesn’t? Is there anything we can or should do about the problem? How can we at least protect our own communities from runaway virtue signaling? (The recent calls against appeals to consequences make more sense to me now, given this framing, but I still think they may err too much in the other direction.)

P.S. It was interesting to read this in Miller’s latest book, Virtue Signaling:

Where does the term ‘virtue signaling’ come from? Some say it goes back to 2015, when British journalist/author James Bartholomew wrote a brilliant piece for The Spectator called ‘The awful rise of ‘virtue signaling.’’ Some say it goes back to the Rationalist blog ‘LessWrong,’ which was using the term at least as far back as 2013. Even before that, many folks in the Rationalist and Effective Altruism subcultures were aware of how signaling theory explains a lot of ideological behavior, and how signaling can undermine the rationality of political discussion.

I didn’t know that “virtue signaling” was coined (or at least first used in writing) on LessWrong. Unfortunately, from a search, it doesn’t seem like there was substantial discussion around the term at the time. Signaling in general was much discussed on LessWrong and Overcoming Bias, but I find myself still updating towards it being more important than I had realized.