Argument for Friendly Universe:
Pleasure/pain is one of the simplest control mechanisms, so it seems probable that it would be discovered by any sufficiently advanced evolutionary process anywhere.
Once general intelligence arises as a result of an evolutionary process, it will apply itself to optimizing the (unnecessary) pain away.
Generally, it will succeed. (General intelligence = power of general-purpose optimization.)
Although in a big universe there would exist worlds where unnecessary suffering does not decrease to zero, that would only happen via long and constantly growing chains of low-probability coincidences. The total measure of those worlds tends to zero.
Conclusion: the universe (either big or small) generally operates in such a way as to minimize the unnecessary suffering of all sentient beings.
Generalization: the universe (either big or small) generally operates in such a way as to maximize the values of all sentient beings.
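The measure claim in the argument above can be put as a one-line toy computation (my own illustration, not part of the original argument; the per-step coincidence probability `p` is an assumed parameter):

```python
# Toy model: if escaping optimization requires an independent
# low-probability coincidence (probability p < 1) at every step,
# the measure of worlds still containing unnecessary suffering
# decays geometrically toward zero.
def surviving_measure(p: float, steps: int) -> float:
    """Fraction of worlds where unnecessary suffering survives
    `steps` independent coincidences of probability `p` each."""
    return p ** steps

print(surviving_measure(0.1, 10))  # on the order of 1e-10
```

The point is only that independent coincidences multiply, so the surviving measure shrinks exponentially with the length of the chain.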
Its own pain, probably. Why do you believe it will care about the pain of other beings?
Cooperation with other intelligent beings is instrumentally useful, unless the pain of others is one’s terminal value.
If one being is a thousand times more intelligent than another, such cooperation may be a waste of time.
Why do you think so? By default, I think their interaction would run like this: the much more intelligent being will easily persuade/trick the other one to do whatever the first one wants, so they’ll cooperate.
Imagine yourself and a bug. A bug that understands numbers up to one hundred, and can even do basic arithmetic, though in 50% of cases it gets the answer wrong. That’s pretty impressive for a bug… but how much value would cooperation with this bug provide to you? For comparison, how much value would you get by removing such bugs from your house, or by driving your car without caring how many bugs you kill along the way?
You don’t have to want to make the bugs suffer. It’s enough if they have zero value for you, and you can gain some value by ignoring their pain. (You could also tell them to leave your house, but maybe they have nowhere else to go, or are just too stupid to find a way out, or they always forget and return.)
Now imagine a being with a similar attitude towards humans. It can do any kind of human thought or work better, and at a lower cost than even communicating with us would take. It does not hate us; it just can derive some important value by replacing our cities with something else, or by increasing radiation, etc.
(And that’s still assuming a rather benevolent being with values similar to ours. More friendly than a hypothetical Mother-Theresa-bot convinced that the most beautiful gift for a human is that they can participate in suffering.)
Such a scenario is certainly conceivable. On the other hand, bugs do not have general intelligence. So we can only speculate about how interaction between us and much more intelligent aliens would go. By default, I’d say they’d leave us alone. Unless, of course, there’s a hyperspace bypass that needs to be built.
The conclusion doesn’t follow. Ripping apart your body to use the atoms to construct something terminally useful is also instrumentally useful.
Only if there’s a general lack of atoms around. When atoms are abundant, it’s more instrumentally useful to ask me for help constructing whatever you find terminally useful.
Right, but your conclusion still doesn’t follow—my example was just to show the flaw in your logic. Generally, you have to consider the trade-offs between cooperating and doing anything else instead.
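The trade-off mentioned above can be made concrete with a made-up numerical sketch (all payoff values here are hypothetical, chosen only to illustrate the comparison):

```python
# A powerful agent compares the net value of cooperating with a
# weaker agent against simply using the weaker agent's atoms.
def best_action(value_of_cooperation: float,
                value_of_atoms: float,
                cost_of_communication: float) -> str:
    net_cooperate = value_of_cooperation - cost_of_communication
    return "cooperate" if net_cooperate > value_of_atoms else "disassemble"

# When communication is cheap relative to what the weaker agent
# contributes, cooperation wins:
print(best_action(10, 1, 2))   # cooperate
# When communicating costs more than the cooperation is worth,
# it does not:
print(best_action(10, 1, 12))  # disassemble
```

Which branch is taken depends entirely on the assumed numbers, which is the point: cooperation is not instrumentally useful in general, only under particular payoff structures.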
Well, of course. But which of my conclusions do you mean doesn’t follow?
But the “[of others]” part is unnecessary. If every intelligent agent optimizes away its own unnecessary pain, that is sufficient for the conclusion. Unless, of course, there exists a significant number of intelligent agents that have the pain of others as a terminal goal, or there’s a serious lack of atoms for all agents to achieve their otherwise non-contradicting goals.
This is highly dependent on the strategic structure of the situation.
Since I would care, I think other intelligences could care too. A single one who cares might be enough to free us all from pain. A billion who don’t care are not enough to preserve the pain.
I’d be interested in seeing you playing a Devil’s advocate to your own position and try your best to counter each of the arguments.
Fair enough :)
Counterarguments:
The rate of appearance of new suffering intelligent agents may be higher than the rate of disappearance of suffering due to optimization efforts.
A significant number of evolved intelligent agents may have directly opposing values.
The power of general intelligence may be greatly exaggerated.
I rather think that the power of general intelligence is greatly underestimated. Don’t misunderestimate!
The probability of a general intelligence destroying itself because of errors of judgement may be large. This would mean that “the power of general intelligence is greatly exaggerated”—nonexistent intelligence is unable to optimize anything anymore.
Which side do you find more compelling and why?
What’s your opinion?
What other mechanisms have you compared it to?
How do you define “pain” in the general case? How does one define unnecessary pain? Does boredom count as necessary pain? How far into the future do you have to trace the consequences before deciding that a certain discomfort is unnecessary?
To a lack of any.
Sharp negative reinforcement in a behavioristic learning process.
Useless/inefficient for the necessary learning purposes.
Depends on the circumstances. When boredom is inevitable and there’s nothing I can do about it, I would prefer to be without it.
Same time range in which my utility function operates.
(EDIT: I’m sorry, I should have asked you for your own answers to your questions first. Stupid me.)
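The definition of pain above as “sharp negative reinforcement in a behavioristic learning process” can be sketched as a minimal toy learner (my own illustration, with made-up actions and rewards, not something from the discussion):

```python
import random

# Toy behavioristic learner: the agent keeps a running value estimate
# per action; the "painful" action delivers a sharp negative reward,
# and the agent learns to rate it below the safe one.
def learn(trials: int = 1000, lr: float = 0.1, seed: int = 0) -> dict:
    random.seed(seed)
    values = {"safe": 0.0, "painful": 0.0}
    for _ in range(trials):
        action = random.choice(sorted(values))          # try both actions
        reward = 0.0 if action == "safe" else -1.0      # pain = sharp negative reward
        values[action] += lr * (reward - values[action])  # incremental update
    return values

v = learn()
print(v["painful"] < v["safe"])  # True: the painful action is learned to be worse
```

On this reading, “optimizing the pain away” just means arranging the world so that the learning signal no longer needs to fire.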
Do you actually buy this? I don’t have the spoons or the time to refute it point by point, but I think it’s completely, maybe even obviously and overdeterminedly, wrong, if a somewhat interesting idea.
I wrote it for novelty value, although it seems to be a defensible position. I can think of counterarguments, and counter-counterarguments, etc. Of course, if you are not interested and/or don’t have time, you shouldn’t argue about it.
Thanks for the “spoons” link, a great metaphor there.