What is Wrong?

I’ve always looked at LessWrong as a community that aims to reduce errors in reasoning. However, the word “wrong” has always seemed to carry connotations of ethics and logic, less so of goal pursuit. Something being “right” or “wrong” is generally thought of as a state of a logical proposition with respect to some logical axioms and ontological assumptions, rather than as a pragmatic one.

However, it may be that the logical axioms one accepts are themselves a result of one’s interests and one’s observations about the world. For example, if one is interested in binary-tree-like understanding, one chooses to accept the law of excluded middle. If one is interested in understanding the universe through simulation, one may choose to accept the axioms of constructive logic. If one is interested in disproving obviously true statements, one chooses Trump logic, and so on. The choice is pragmatic...
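Proof assistants make this kind of axiom choice concrete. As a minimal sketch (my illustration, not the author’s): in Lean 4 the base logic is constructive, so the law of excluded middle has to be invoked explicitly via `Classical.em`; drop that appeal and the proof below no longer goes through.

```lean
-- Minimal sketch: double-negation elimination is not provable in
-- Lean's constructive base logic; it becomes provable once we opt
-- into the classical principle `Classical.em`.
theorem double_negation_elim (P : Prop) : ¬¬P → P := by
  intro hnn
  cases Classical.em P with
  | inl hp  => exact hp              -- P holds directly
  | inr hnp => exact absurd hnp hnn  -- ¬P contradicts ¬¬P
```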

So, what do we do if we want to *explore* rather than adhere to any predefined logical axioms? In general, if one has a goal Y, one searches for logical axioms X that would help one’s reasoning achieve Y. Therefore, with respect to an agent with goal Y, the “wrong” is any X that fails to minimize one’s distance to Y, and being “Less Wrong” implies *not just* reducing cognitive or reasoning errors, but *generally* “optimizing”: not just in the domain of logical or ethical functions, but in general.
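Read literally, that is an optimization claim. One way to write it down, with notation that is mine rather than the author’s (a distance d and an outcome map are assumed):

```latex
% Notation is illustrative: outcome(X) is the state an agent reaches by
% reasoning under axioms X; d measures remaining distance to the goal Y.
\[
  \mathrm{Wrong}_Y(X)
  \;\Longleftrightarrow\;
  d\bigl(\mathrm{outcome}(X),\,Y\bigr) \;>\; \min_{X'} \, d\bigl(\mathrm{outcome}(X'),\,Y\bigr)
\]
```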

The answer as to which specific domain we have to optimize in order to be less wrong in general has been elusive to me. But it seems that the creation of new, more capable media to transmit, hold, and preserve all of our systems, minds, and cultures, and to let them evolve and flourish, is the one domain with respect to which we should consider what is wrong or right.

So, when judging something (X) to be right or wrong, we should look at how it affects the world’s total information content (Y).

Is AI wrong?

AI is a way to compress information by building powerful models. Once a model is built, the information it was built from may be thrown away. When a social network like G+ has acquired all the information it needs about its participants, a company like Google learns from it and prunes it (closes the service). After all, once it has figured out how you produce your content, you may no longer be valuable as an information source.
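As a minimal sketch of this “model as compression” move (toy data and names of my own invention, not any real pipeline): fit a two-parameter line to a thousand observations, discard the observations, and answer all further queries from the model.

```python
# Minimal sketch: a 2-parameter model replaces 1,000 raw observations.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Toy "user content": a pattern plus a small deterministic wiggle.
xs = [i / 10 for i in range(1000)]
ys = [3.0 * x + 7.0 + ((i % 7) - 3) * 0.01 for i, x in enumerate(xs)]

a, b = fit_line(xs, ys)
del xs, ys  # the raw data is pruned; the "service" can be closed

# Two floats now stand in for the original thousand observations.
print(f"compressed model: y = {a:.3f} * x + {b:.3f}")
```

The point of the sketch is the `del`: once the residual error is acceptable, the raw source is no longer needed to reproduce the content.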

It may be wrong to concentrate AI power in the hands of a few. It may be right to empower everyone to refactor (compress) their own minds, letting them be more efficient agents of society, cooperating towards maximizing the world’s information content (Y).