I concur. Prediction is everything there is to intelligence, really.
If a program could predict what I am going to type here, it would be as intelligent as I am, at least in this domain. It could post instead of me.
But the same goes for every other domain. To predict every action of an intelligent agent is to be as intelligent as that agent.
I don’t see a case where this symmetry breaks down.
EDIT: But this is an old idea. Decades old, nothing very new.
You’re talking about predicting the actions of an intelligent agent.
LeCun is talking about predicting the environment. These are two different concepts.
No, they are not. Every intelligent agent is just a piece of the environment.
Intelligence can exist even in isolation from any other intelligent agents. Indeed, the first super-intelligent agent is likely to be without peer.
Look! The point is about prediction and intelligence. It doesn’t matter what a predictor has around itself; it just predicts. That’s what it does.
And what does a (super)intelligence do? It predicts. Very well, probably.
The dichotomy is needless.
Some examples:
predicting the solution of a partial differential equation
predicting the best method to solve the given equation
predicting how a process might behave
predicting the best action you may take to achieve a goal
predicting the best possible move in a given chess position
predicting what a cyphered message is about …
I predict you can’t give me a counterexample: a case where an obviously intelligent solution can’t be regarded as a prediction.
This went under the name of SP theory, long ago: the idea that prediction, compression, and intelligence are actually the same thing.
http://www.researchgate.net/publication/235892114_Computing_as_compression_the_SP_theory_of_intelligence
Almost tautological, but inescapable.
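The prediction–compression link mentioned above has a standard information-theoretic reading: an ideal arithmetic coder spends -log2(p) bits on a symbol to which a predictor assigned probability p, so a better predictor compresses the same data into fewer bits. A minimal sketch (the predictors and data here are my own toy examples, not from the SP paper):

```python
import math

def code_length_bits(sequence, predict):
    """Total bits an ideal coder needs, given a next-symbol predictor.

    `predict(history)` returns a dict mapping each possible symbol to
    its predicted probability. An ideal coder spends -log2(p) bits on
    a symbol predicted with probability p.
    """
    total = 0.0
    for i, symbol in enumerate(sequence):
        probs = predict(sequence[:i])
        total += -math.log2(probs[symbol])
    return total

data = "ababababab"

# A predictor that knows nothing: uniform over {a, b}.
uniform = lambda history: {"a": 0.5, "b": 0.5}

# A predictor that has learned the alternating pattern
# (hedged toward, but not at, certainty).
def alternating(history):
    if not history:
        return {"a": 0.5, "b": 0.5}
    nxt = "b" if history[-1] == "a" else "a"
    return {nxt: 0.9, ("a" if nxt == "b" else "b"): 0.1}

print(code_length_bits(data, uniform))      # 10.0 bits: 1 bit per symbol
print(code_length_bits(data, alternating))  # far fewer bits: better prediction, better compression
```

The direction also runs the other way: anything that compresses a sequence well implicitly predicts it well, which is the "almost tautological" identity the comment points at.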
In order to do this you need training data on what the optimal move is. Such data may not exist, and at best it limits you to playing only as well as the player you are predicting.
Additionally, prediction is inherently less optimal than search, unless your predictions are 100% perfect. You are choosing moves because you predict they are optimal, rather than because they are the best moves you have found. If, for example, you try to play by predicting what a chess master would do, your play can at best match that chess master’s, and any prediction error makes it worse.
They are closely related but not the same thing.
A counterexample is chess.
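The imitation-ceiling point above can be made concrete with a toy one-move game (the moves and payoffs here are my own illustration): even a *perfect* predictor of a mediocre player reproduces mediocre play, while search over the actual payoffs finds the best move.

```python
# True value of each available move in a toy one-shot position.
payoff = {"a": 1, "b": 5, "c": 3}

def search_player(position):
    # Search: examine every option and pick the best one found.
    return max(position, key=position.get)

def mediocre_player(position):
    # The player being imitated always plays "c" (suboptimal here).
    return "c"

def imitator(position):
    # A perfect predictor of the mediocre player: it outputs exactly
    # the move it predicts that player would make.
    return mediocre_player(position)

print(search_player(payoff))  # 'b', payoff 5
print(imitator(payoff))       # 'c', payoff 3: capped at the imitated player's level
```

The counter to this, implicit in the thread, is that one could instead predict the *optimal* move rather than a particular player's move; but then the training-data objection above applies.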
What does an ideal chess player do? It predicts which move is optimal. That may be a tricky feat, but it is good and predicts well.
I looked through this thread in the past few minutes and clearly saw this “ideological division”. A few people think as I do. Others say you can’t solve causal problems with mere prediction, but they don’t give a clear example.
Don’t you agree that an ideal “best next chess move predictor” is the strongest possible chess player?
Maybe it would be useful to define terms, to make things more clear.
If you have a time-process X, and t observations from this process, a predictor comes up with a prediction as to what X_t+1 will be.
On the other hand, given a utility function f() on a series of possible outcomes Y from t+1 to infinity, a decision maker finds the best Y_t+1 to choose to maximize the utility function.
Note that the definition of these two things is not the same: a predictor is concerned about the past and immediate present, whereas a decision maker is concerned with the future.
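The two definitions above can be sketched as interfaces (the function names and toy strategies are my own, not standard terminology): a predictor maps past observations to a forecast of X_t+1, while a decision maker maps a set of options and a utility function to a choice.

```python
def predictor(observations):
    """Forecast X_{t+1} from X_1..X_t.

    Toy strategy: naively repeat the last observed value.
    """
    return observations[-1]

def decision_maker(options, utility):
    """Choose the Y_{t+1} that maximizes the given utility function."""
    return max(options, key=utility)

history = [1, 2, 2, 3, 3]
print(predictor(history))  # 3: a statement about what the process will do

# Utility: prefer outcomes close to a goal value of 4.
print(decision_maker([0, 5, -2], lambda y: -abs(y - 4)))  # 5: a choice serving a goal
```

The signatures make the asymmetry visible: the predictor never sees a utility function, and the decision maker never sees the process history; collapsing one into the other requires extra machinery (e.g. predicting "the move a utility-maximizer would choose").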
This “t+1” might be “t+X”. The predictions for a large X may be very bad, and so may the predictions for “t+1”. Still, the predictor does its best.
It predicts the best decision that can be taken.
In part of the interview LeCun is talking about predicting the actions of Facebook users, e.g. “Being able to predict what a user is going to do next is a key feature”
But not predicting everything they do and exactly what they’ll type.