What is Wrong?

I’ve always looked at LessWrong as a community that aims to reduce errors of reasoning. However, the word “wrong” has always seemed to me to carry connotations of ethics and logic more than of goal pursuit. Something being “right” or “wrong” is generally thought of as a property of a logical proposition with respect to some logical axioms and ontological assumptions, rather than as a pragmatic judgment.

However, it may be that the axioms of logic one accepts are themselves a result of one’s interests and one’s observations about the world. For example, if one is interested in a binary, tree-like understanding, one chooses to accept the law of excluded middle. If one is interested in understanding the universe through simulation, one may choose to accept the axioms of constructive logic. If one is interested in disproving obviously true statements, one chooses Trump logic, and so on. The choice is pragmatic...
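To make “choosing the axioms” concrete: in a proof assistant like Lean 4, the law of excluded middle is not built into the constructive core; you opt into it via the classical axioms, while only its double negation is provable without them. A minimal sketch:

```lean
-- Provable constructively, with no extra axioms:
-- the double negation of excluded middle.
theorem not_not_em (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))

-- Excluded middle itself requires opting into Lean's classical
-- axioms (Classical.em is derived from the axiom of choice).
example (p : Prop) : p ∨ ¬p := Classical.em p
```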

So, what do we do if we want to *explore* rather than adhere to any predefined logical axioms? In general, if one has a goal Y, one searches for logical axioms X that would help one’s reasoning achieve Y. Therefore, with respect to an agent with a goal Y, “wrong” is any X that does not minimize the agent’s distance to Y, and being “Less Wrong” implies *not just* reducing cognitive or reasoning errors, but *generally* “optimizing”: not just in the domain of logical or ethical functions, but in general.
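In rough notation (my own shorthand, with d standing for whatever distance-to-goal the agent cares about):

$$\text{“wrong”} \;=\; \text{any } X \text{ such that } X \notin \operatorname*{arg\,min}_{X'} \, d\big(\text{reasoning under } X',\; Y\big).$$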

The answer as to which specific domain we have to optimize in order to be less wrong in general has been elusive to me. But it seems that the creation of new, more capable media to transmit, hold, and preserve all of our systems, minds, and cultures, and to let them evolve and flourish, is the one domain with respect to which we should judge what is wrong or right.

So, when judging something (X) to be right or wrong, we should look at how it affects the world’s total information content (Y).

Is AI wrong?

AI is a way to compress information by creating powerful models. Once the model is built, the information it was built from may be thrown away. When a social network like G+ acquires all the information it needs about its participants, a company like Google learns it and then prunes it (closes the service). After all, once it has figured out how you produce content, you may no longer be valuable as an information source.
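A toy sketch of “model as compression” (illustrative only; the data, the model, and the numbers are all made up):

```python
# Fit a cheap model to many observations, keep only its parameters,
# and discard the raw data: the model now stands in for the source.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 1000)
y = 3.0 * x + 2.0 + rng.normal(scale=0.5, size=x.size)  # "user-generated content"

# Learn a 2-parameter model of the 1000 observations.
slope, intercept = np.polyfit(x, y, deg=1)

# Prune the source: once the generator is modeled, the raw data adds little.
del y

# The model regenerates a close approximation on demand.
y_reconstructed = slope * x + intercept
print(f"kept 2 numbers instead of 1000: slope={slope:.2f}, intercept={intercept:.2f}")
```

Once the two parameters are kept, the thousand observations are redundant up to noise, which is exactly the sense in which the source stops being valuable.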

It may be wrong to concentrate AI power in the hands of a few. It may be right to empower everyone to refactor (compress) their own minds, and to let them be more efficient agents of society, cooperating toward maximizing the world’s information content (Y).