No write-up. The idea is that you can decide between two situations by choosing the one with greater information or complexity. The trickiness is in deciding how to measure information or complexity, and in deciding what to measure the complexity of. You probably don’t want to conclude that, in a closed system, the ethically best thing to do is nothing because doing anything increases entropy. (Perhaps using a measure of computation performed, instead of a static measure of entropy, would address that.)
This immediately gives you a lot of ethical principles that are otherwise difficult to justify: valuing evolution, knowledge, diversity, and the environment; condemning (non-selective) destruction and censorship. Also, whereas most ethical systems tend toward extreme points of view, the development of complexity is greatest when control parameters take on intermediate values. Conservatives value stasis; progressives value change; those who wish to increase complexity aim for a balance between the two.
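To make the "choose the situation with greater complexity" rule concrete, here is a minimal sketch. It uses zlib compressed length as a crude, computable stand-in for Kolmogorov complexity; the function names and the toy "situations" are mine, not anything from the discussion above, and compressed length inherits exactly the problem raised later in this thread (it rewards randomness as well as structure).

```python
import zlib

def complexity_proxy(description: bytes) -> int:
    """Crude stand-in for Kolmogorov complexity: the length of the
    zlib-compressed description of a situation. (A toy proxy only.)"""
    return len(zlib.compress(description, 9))

def prefer(a: bytes, b: bytes) -> bytes:
    """Decide between two situations by choosing the one whose
    description has the higher proxy complexity."""
    return a if complexity_proxy(a) >= complexity_proxy(b) else b

# Two hypothetical situation descriptions of equal length:
uniform = b"AAAA" * 256          # maximal order; compresses to almost nothing
varied  = bytes(range(256)) * 4  # more internal variety; compresses far less

chosen = prefer(uniform, varied)  # the varied description wins
```

The rule picks `varied`, loosely matching the intuition that diversity scores higher than monoculture under such a measure; how to describe a situation in the first place (what to measure the complexity *of*) is still the open question the comment flags.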
(The equation in my comment is not specific to that idea, so it may be distracting you.)
(Perhaps using a measure of computation performed, instead of a static measure of entropy, would address that.)
This is exactly what I have been thinking for a while, too. In this view, when thinking about how bad it would be to destroy something, one should ask how much computation it would take to recreate it. This seems like a really promising idea, because it gives a unified reason to be against both murder and destruction of the rain forests.
Still, it is probably not enough to consider only the amount of computation—one could come up with counterexamples of programs computing really boring things...
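The "boring computation" counterexample can be made concrete with a toy sketch (the function is hypothetical, purely for illustration): a program can perform an enormous number of steps while producing something that is trivial to recreate, so raw computation count alone cannot be the measure of value.

```python
def busy_but_boring(n: int) -> int:
    """Performs n increment steps, yet the output is just n itself.
    Lots of computation, almost no resulting structure: the result
    could be 'recreated' in one step by writing down n directly."""
    x = 0
    for _ in range(n):
        x += 1
    return x
```

A measure based on computation-to-recreate would have to count the cost of the *cheapest* way to regenerate the object, not the cost the original process happened to pay.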
This parallels some of the work I’m doing with fun-theoretic utility, at least in terms of using information theory. One big concern is what measure of complexity to use, as you certainly don’t want to use a classical information measure—otherwise Kolmogorov random outcomes will be preferred to all others.