“rather there’s a tendency to assume that complexity of value must lead to complexity of outcome”
The main problem I see here is the other way around:
There’s a tendency to assume that complexity of outcome must have been produced by complexity of value.
AFAICS, it is only members of this community that think this way. Nobody else seems to have a problem with the idea of goals that can be concisely expressed—like “trying to have as many offspring as possible”—leading to immense diversity and complexity.
This is a facet of an even more basic principle—that extremely simple rules can produce extremely complex outcomes—see, for example, the r-pentomino in Conway’s Game of Life (sketched below).
You can see it clearly in the case of simple goals like “winning games of go”. The simple goal leads to an explosion of complexity—in the form of the resulting go-playing programs.
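To make the r-pentomino example concrete, here is a minimal Game of Life sketch in Python. The grid representation and the 1,000-generation cutoff are arbitrary illustration choices (the pattern takes on the order of a thousand generations to settle):

    from collections import Counter

    def step(cells):
        # One generation of Conway's Life; `cells` is a set of live (x, y) pairs.
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in cells
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Live next generation: exactly 3 live neighbours,
        # or 2 live neighbours and currently live.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

    cells = {(1, 0), (2, 0), (0, 1), (1, 1), (1, 2)}  # the r-pentomino
    for _ in range(1000):
        cells = step(cells)
    print(len(cells), "live cells after 1000 generations")

Five cells and one rule table are the entire specification; the long, chaotic trajectory they generate is the sense of “explosion of complexity” at issue.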
Are you talking about Kolmogorov complexity or something else? Because the outcome which optimizes a simple goal would have a low Kolmogorov complexity.
Kolmogorov complexity is fine by me.
What makes you say that? It isn’t right.
Filling the universe with orgasmium involves interstellar and intergalactic travel, stellar farming, molecular nanotechnology, coordinating stars to leap between galaxies, mastering nuclear fusion, conquering any other civilisations it might meet along the way—and many other complexity-requiring activities.
Tim, you seem to be failing to distinguish between complex in the technical sense and complex-looking. Remember that the Mandelbrot set is simple, not complex in the technical sense.
Indeed—sorry! The r-pentomino’s evolution is not a good example of high Kolmogorov complexity—though as you say, it is complex in other senses.
I had forgotten that I gave that as one of my examples when I retroactively assented to the use of Kolmogorov complexity as a metric.
Well, if you had a utility function over a finite set of possible outcomes, then you could run a computer program to check every outcome and pick the one with the highest utility. So the complexity of that outcome is bounded by the complexity of the set of possible outcomes plus the complexity of the utility function plus a constant.
EDIT: And none of those things you mentioned require a lot of complexity.
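To spell that bounding argument out as code (a minimal sketch; the outcome set and utility function below are toy stand-ins, not anyone’s actual proposal):

    # Any driver of this shape prints the optimal outcome, so
    # K(best outcome) <= K(outcome set) + K(utility function) + c,
    # where c is the length of the driver itself.

    def outcomes():
        # Toy stand-in for a program enumerating a finite outcome set.
        return ["orgasmium", "paperclips", "status quo"]

    def utility(outcome):
        # Toy stand-in for a program computing the utility function.
        return {"orgasmium": 10, "paperclips": 3, "status quo": 1}[outcome]

    print(max(outcomes(), key=utility))  # prints: orgasmium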
If the things I mentioned are so simple, perhaps you could explain how to do them?
I would be especially interested in a “simple” method of conquering any other civilisations which we might meet—so perhaps you might like to concentrate on that?
Build AIXItl.
Alas, AIXItl is a whole class of things, many of which are likely to be highly complex.
This contradicts my understanding of AIXI from Shane Legg’s Extrobritannia presentation. What’s the variable bit? Not the utility function; that’s effectively external and after the fact, and AIXI infers it.
I think I answered that in the other sub-thread descended from the parent comment.
If you’re referring to the parameters t and l, I’ll suggest a googolplex as a sufficiently large number with low Kolmogorov complexity.
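For what it’s worth, the sense in which a googolplex is K-simple is just that its defining expression is a handful of symbols, even though its decimal expansion has \(10^{100}\) digits:

\[ \text{googolplex} = 10^{10^{100}}, \qquad K\bigl(10^{10^{100}}\bigr) \le c, \]

where \(c\) is roughly the length of that defining expression.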
No. AIXItl will need other complexity if you want it to work in a reasonable quantity of time—see, for example:
“Elimination of the factor 2^l without giving up universality will probably be a very difficult task. One could try to select programs p and prove VA(p) in a more clever way than by mere enumeration. All kinds of ideas like heuristic search, genetic algorithms, advanced theorem provers, and many more could be incorporated.”
http://www.hutter1.net/ai/paixi.pdf
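For scale, per the Hutter paper linked above: the construction runs through on the order of \(2^l\) candidate policies of length at most \(l\), executing each for time \(t\), so the computation time per cycle is of order

\[ t \cdot 2^l, \]

which is why eliminating that factor would matter so much.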
It seems that you think “complex” means “difficult.” It doesn’t. Complex means “requires a lot of information to specify.” There are no simple problems with complex solutions, because any specification of a problem is also a specification of its solution. This is the point of my original post.
So: a galaxy-conquering civilisation has low Kolmogorov complexity—because it has a short description—namely “a galaxy-conquering civilisation”???
If you actually attempted to describe a real galaxy-conquering civilisation, it would take a lot of bits to specify which one you were looking at—because the method of getting there will necessarily have involved time-and-space constraints.
Those bits will have come from the galaxy—which is large and contains lots of information.
More abstractly, “Find a root of y = sin(x)” is a simple problem with many K-complex solutions. Simple problems really can have K-complex solutions.
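To sharpen that example (a sketch using the standard incompressibility count):

\[ \{x : \sin x = 0\} = \{n\pi : n \in \mathbb{Z}\}, \qquad K(n\pi) = K(n) + O(1), \]

and since fewer than \(2^k\) integers have \(K(n) < k\), almost all roots are K-complex, even though the problem statement itself is a few bits. A K-simple root such as \(x = 0\) also exists, of course.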
A particular galaxy-conquering civilization might have high Kolmogorov complexity, but if you can phrase the request “find me a galaxy-conquering civilization” using a small number of bits, and if galaxy-conquering civilizations exist, then there is a solution with low Kolmogorov complexity.
Hmm, okay. I should not have said “there are no simple problems with complex solutions.” Rather, there are no simple problems whose only solutions are complex. Are we in agreement?
Joke counterexample:
x^2 = −1 is a simple problem that only has complex solutions. ;)
(Of course, that’s not the meaning of “complex” that you meant.)
Serious counterexample:
The four-color theorem is relatively simple to describe, but the only known proofs are very complicated.
Gah, don’t over-qualify jokes! It’s a supplicating behavior, and seeking permission to be funny blunts the effect. Just throw the “x^2 = −1” out there (which is a good one, by the way) and then go on to say “A more serious counterexample”. That’s more than enough for people to ‘get it’, and anyone who doesn’t will just look silly.
This is the Right (Wedrifid-Laughter-Maximising) thing to do.
I’m sorry. :(
Was that a practical joke on wedrifid?
It is now!
Nice. Die. :P
But that complicated proof could be concisely provided via a universal proof algorithm and the statement of the four color theorem.
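The bound being gestured at (a sketch, with \(c\) the length of a fixed proof-searching program that enumerates candidate strings in length order and returns the first valid proof of its input):

\[ K(\text{proof of } S) \le K(S) + c. \]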
Exactly! The Kolmogorov complexity is not very high.
I am not sure.
How about: what is the smallest number that can’t be described by an English sentence of less than ten thousand words? ;-)
Of course, knowing that a K-simple solution existed in the form of the problem specification would not help very much in constructing/implementing it.
Simple in terms of Kolmogorov complexity, that is. Simple to do? No.
Who are you referring to here? I myself wrote “Simple values do not necessarily lead to simple outcomes either.”
AFAICT, the origin of these ideas is here:
http://lesswrong.com/lw/l3/thou_art_godshatter/
http://lesswrong.com/lw/lb/not_for_the_sake_of_happiness_alone/
http://lesswrong.com/lw/lq/fake_utility_functions/
http://lesswrong.com/lw/y3/value_is_fragile/
This seems to have led a slew of people to conclude that simple values lead to simple outcomes. You yourself suggest that the simple value of “filling the universe with orgasmium” is one whose outcome would mean that “the future of the universe will turn out to be rather simple”.
Things like that seem simply misguided to me. IMO, there are good reasons for thinking that that would lead to enormous complexity—in addition to lots of orgasmium.
...but not in the least convenient possible world with an ontologically simple turn-everything-into-orgasmium button; and the sort of complexity that you mention that (I agree) would be involved in the actual world isn’t a sort that most people regard as terminally valuable.
Here we were talking about a superintelligent agent whose “fondest desire is to fill the universe with orgasmium”. About the only way such an agent would fail to produce enormous complexity is if it died—or was otherwise crippled or imprisoned.
Whether humans would want to live—or would survive in—the same universe as an orgasmium-loving superintelligence seems like a totally different issue to me—and it seems rather irrelevant to the point under discussion.
Or if the agent has a button that, through simple magic, directly fills the universe with (stable) orgasmium. Did you even read what I wrote?
Human morality is the point under discussion, so of course it’s relevant. It seems clear that the chief kind of “complexity” that human morality values is that of conscious (whatever that means) minds and societies of conscious minds, not complex technology produced by unconscious optimizers.
Re: Did you even read what I wrote?
I think I missed the bit where you went off into a wild and highly-improbable fantasy world.
Re: Human morality is the point under discussion
What I was discussing was the “tendency to assume that complexity of outcome must have been produced by complexity of value”. That is not specifically to do with human values.