Quantifying wisdom

So we know that many smart people make stupid (at least in retrospect) decisions. What these people seem to be lacking, at least at the moment they make a poor decision, is wisdom ("judicious application of knowledge"). More from Wikipedia:

It is a deep understanding and realization of people, things, events or situations, resulting in the ability to apply perceptions, judgements and actions in keeping with this understanding. It often requires control of one's emotional reactions (the "passions") so that universal principles, reason and knowledge prevail to determine one's actions.

From Psychology Today:

It can be difficult to define Wisdom, but people generally recognize it when they encounter it. Psychologists pretty much agree it involves an integration of knowledge, experience, and deep understanding that incorporates tolerance for the uncertainties of life as well as its ups and downs. There's an awareness of how things play out over time, and it confers a sense of balance.

Wise people generally share an optimism that life's problems can be solved and experience a certain amount of calm in facing difficult decisions. Intelligence—if only anyone could figure out exactly what it is—may be necessary for wisdom, but it definitely isn't sufficient; an ability to see the big picture, a sense of proportion, and considerable introspection also contribute to its development.

From the Stanford Encyclopedia of Philosophy (SEP):

(1) wisdom as epistemic humility, (2) wisdom as epistemic accuracy, (3) wisdom as knowledge, and (4) wisdom as knowledge and action.

Clearly, if one created a human-level AI, one would want it to "choose wisely". However, as human examples show, wisdom does not come for free with intelligence. Indeed, we usually don't trust intelligent people nearly as much as we trust wise ones (or those who appear wise, at any rate): we don't trust the merely intelligent to make good decisions, because they might be too smart for their own good. Accordingly, one (informal) quality we'd expect an FAI to have is wisdom.

So, how would one measure wisdom? Converting the above description ("ability to apply perceptions, judgements and actions in keeping with this understanding") into a more technical form, one can interpret wisdom, in part, as understanding one's own limitations ("running on corrupt hardware", in the local parlance) and calibrating one's actions accordingly. For example, given two people of the same knowledge and intelligence level (as determined by your favorite intelligence test), how do you tell which one is wiser? You look at how the outcomes of their actions measure up against what they predicted. The good news is that you can practice and test your calibration (and, by extension, your wisdom) by playing with PredictionBook.
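One way to make "how outcomes measure up against what they predicted" concrete is a proper scoring rule. The post doesn't name a specific rule, so the Brier score here is one standard choice, and the forecasters and their predictions are made-up illustrative data:

```python
def brier_score(predictions):
    """Mean squared error between stated probability and actual outcome.

    0.0 is a perfect score; an uninformative 50% on everything scores 0.25.
    `predictions` is a list of (probability, outcome) pairs, outcome in {0, 1}.
    """
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# Hypothetical track records: each pair is (stated probability, what happened).
alice = [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 1)]    # fairly well calibrated
bob = [(0.99, 0), (0.95, 1), (0.9, 0), (0.8, 1)]    # confident but often wrong

# Lower score = better calibration, the "wisdom" proxy suggested above.
print(brier_score(alice))
print(brier_score(bob))
```

Note that the Brier score rewards both calibration (your 70% predictions come true about 70% of the time) and resolution (venturing away from 50% when you actually know something), which matches the intuition that wisdom is knowledge applied, not just hedged.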

For example, Aaron Swartz was clearly very smart, but was it wise of him to act the way he did, gambling on one big thing after another, without a clear sense of what was likely to happen and at what odds? On the other end of the spectrum, you can often see wise people of average intelligence (or lower) recognizing their limitations and sticking with "what works".

Now, this quantification is clearly not exhaustive. Even for a perfectly calibrated person, how do you quantify being appropriately cautious when making drastic choices and appropriately bold when making minor ones? What algorithms/decision theories make someone wiser? Bayesianism can surely help, but it relies on decent priors and does not compel one to act. Would someone implementing TDT or UDT to the best of their ability maximize their wisdom for a given intelligence/knowledge level? Is this even a meaningful question to ask?

EDIT: fixed fonts (hopefully).