I’m sure he’s not a crank. Which leaves the important question: is he right? I don’t know, but if he is, it’s highly relevant to the question of FAI, and suggests that the MIRI approach of considering an AI as a logical system to be designed to be safe may be barking up the wrong tree. From an interview with Wissner-Gross:
“The conventional storyline [of SF about AI],” he says, “has been that we would first build a really intelligent machine, and then it would spontaneously decide to take over the world.”
But one of the key implications of Wissner-Gross’s paper is that this long-held assumption may be completely backwards — that the process of trying to take over the world may actually be a more fundamental precursor to intelligence, and not vice versa.
...
“Our causal entropy maximization theory predicts that AIs may be fundamentally antithetical to being boxed,” he says. “If intelligence is a phenomenon that spontaneously emerges through causal entropy maximization, then it might mean that you could effectively reframe the entire definition of Artificial General Intelligence to be a physical effect resulting from a process that tries to avoid being boxed.”
But as I said on a previous occasion when this came up, the outside view here is that so far it’s just a big idea and toy demos.
Thank you for your response. Having thought about it for a while, I think he is wrong. (Whether he is a crank is a different issue, and probably not worth worrying about.)
I think the problem can be illustrated with the following example:
Suppose you are writing a computer program to find the fastest route between two cities, and the program must select between two possibilities: take the express highway or take local roads. A naive interpretation of Wissner-Gross's approach would be to take the local roads, because that leaves you more options. However, this does not seem to be the more intelligent choice in general. So a naive interpretation of the Wissner-Gross approach appears to be basically a heuristic: useful in some situations but not others.
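To make the contrast concrete, here is a toy sketch in Python of the two decision rules. The route data, the "branching_options" score, and both functions are my own invented illustration, not anything taken from the paper:

# Toy illustration (invented numbers): "keep your options open" vs. "get there fastest".
routes = {
    "express_highway": {"travel_time_hours": 1.0, "branching_options": 3},
    "local_roads":     {"travel_time_hours": 2.5, "branching_options": 12},
}

def pick_by_options(routes):
    # Naive option-counting heuristic: prefer the route with more side roads and exits.
    return max(routes, key=lambda r: routes[r]["branching_options"])

def pick_by_time(routes):
    # Ordinary route planning: prefer the route that minimizes travel time.
    return min(routes, key=lambda r: routes[r]["travel_time_hours"])

print(pick_by_options(routes))  # local_roads
print(pick_by_time(routes))     # express_highway

The option-counting rule picks the local roads even though the highway is plainly the better choice for the stated goal, which is the sense in which the naive reading is only a heuristic.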
But is this interpretation of Wissner-Gross’s approach correct? I expect he would say “no,” that taking the express highway actually entails more options because you get to your destination quicker, resulting in extra time which can be used to pursue other activities. Which is fine, but it seems to me that this is circular reasoning. Of course the more intelligent choice will result in more time, money, energy, health, or whatever, and these things give you more options. But this observation tells us nothing about how to actually achieve intelligence. It’s like the investment guru who tells us to “buy low sell high.” He’s stating the obvious without imparting anything of substance.
I admit it’s possible I have misunderstood Wissner-Gross’s claims. Is he saying anything deeper than what I have pointed out?
My thoughts: Yeah, he’s wrong. And he got a paper on this junk published in PRL? Sheesh.
He demos a program maximizing some entropy function and claims intelligent behavior. Well, he could just as easily have made the program try to move everything to the left and claimed intelligent behavior from that, too. The apparent intelligence came not from what he maximized, but from a complex set of behaviors he paid someone to program into the agent and then glossed over.
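For anyone curious what "maximizing some entropy function" amounts to in practice, here is a rough sketch of the general recipe as I understand it. It is my own reconstruction in Python (the grid size, horizon, sample count, and the distinct-end-state proxy are all arbitrary choices of mine), not the code from his demo:

import random
from collections import Counter

GRID = 9          # 9x9 grid world
HORIZON = 6       # how many steps ahead to sample
SAMPLES = 200     # random rollouts per candidate first move
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(pos, move):
    # Apply a move, clipping at the walls; walls cut off possible futures.
    x, y = pos
    dx, dy = move
    return (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))

def future_spread(pos, first_move):
    # Crude proxy for causal path entropy: how many distinct end states
    # do random rollouts reach, given that we commit to first_move now?
    ends = Counter()
    for _ in range(SAMPLES):
        p = step(pos, first_move)
        for _ in range(HORIZON - 1):
            p = step(p, random.choice(MOVES))
        ends[p] += 1
    return len(ends)

def choose_action(pos):
    # Pick whichever first move keeps the most distinct futures reachable.
    return max(MOVES, key=lambda m: future_spread(pos, m))

print(choose_action((0, 0)))  # an agent in a corner tends to prefer moves toward open space

An agent run this way tends to drift toward the middle of an empty room, which looks vaguely purposeful. The point stands, though: you could swap future_spread for "distance moved to the left" and get equally confident-looking behavior, so the interesting part of any demo is everything around the maximization, not the maximization itself.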
Is this guy a crank? He seems to be claiming that he has found the E=mc^2 for intelligence, artificial or otherwise.
http://www.exponentialtimes.net/videos/equation-intelligence-alex-wissner-gross-tedxbeaconstreet
My alarm bells are going off, but I am interested to hear people’s thoughts.
See the previous discussion also. He has been mentioned several other times without much discussion.
Articles:
http://phys.org/news/2013-04-emergence-complex-behaviors-causal-entropic.html
http://www.newyorker.com/online/blogs/elements/2013/05/a-grand-unified-theory-of-everything.html
http://www.bbc.com/news/science-environment-22261742
Paper:
http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf
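For anyone who doesn’t want to wade through the paper: as far as I can tell, the claimed "equation for intelligence" is a "causal entropic force" of roughly this form (my paraphrase of the paper’s definition):

% Causal entropic force, as I read the PRL paper:
% a force proportional to the gradient of the entropy of possible future paths.
F(X_0, \tau) \;=\; T_c \, \nabla_X S_c(X, \tau) \big|_{X = X_0}

where S_c(X, \tau) is the entropy of the distribution over possible paths of duration \tau starting from macrostate X, and T_c is a constant ("causal path temperature") setting the strength of the force. Informally: push the system in whatever direction keeps the most future histories open.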