The current state of a game is small, but the branching of future states based on legal moves is pretty huge. According to https://www.chessprogramming.org/Chess_Position, there are as many as 10^46 reachable positions, though presumably real games prune that pretty heavily. In fact, we probably have storage and processing nowadays to build the complete lookup table if someone wanted to spend the money.
Basically, the rules are such that evaluating the tree of possible future states means solving for each future state: finding the leaf outcome if the player takes the “best” branch at each node and the opponent takes the “worst”. So a very large amount of the future space needs to be searched.
Ok, let’s go with chess. For that game there is an optimal balance between the tree search and the evaluation function. The search is exploratory. The evaluation is a score.
The evaluation can obviously predict the search to some degree. Humans are very bad at searching, yet some can still win against computers.
The search is decompressing the information into something more easily evaluated by a computer. A human can do it with much less expansion. Is it just a matter of hardware, or is it because the information was there all along and just needed a “smarter” analysis?
I may not have been clear enough. The evaluation _IS_ a search. The value of a position is exactly the value of a minimax adversarial search to a leaf (game end).
Compression and caching and prediction are ways to work around the fact that we don’t actually have the lookup table available.
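The claim above can be sketched concretely. A minimal minimax on a toy game tree (hypothetical, not chess): the value of a position is defined only by searching to the terminal leaves.

```python
# Minimal sketch: the "value" of a position is exactly the value of a
# minimax search to the leaves, shown on a toy game tree (not chess).

def minimax(node, maximizing):
    """Return the exact game-theoretic value of `node`.

    `node` is either a number (a terminal leaf outcome) or a list of
    child nodes (positions reachable in one move).
    """
    if isinstance(node, (int, float)):   # leaf: game over, outcome known
        return node
    values = (minimax(child, not maximizing) for child in node)
    return max(values) if maximizing else min(values)

# The player to move picks the "best" branch, the opponent the "worst".
tree = [[3, 5], [2, [9, -1]]]
print(minimax(tree, True))  # 3
```

With a complete position→value lookup table, this whole recursion would collapse into a single table read.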
Then you are wrong: since the search usually does not reach a checkmate state, there is always a scoring heuristic replacing further search at some depth.
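That cutoff can be sketched as a depth-limited minimax where a heuristic stands in below the horizon. Everything here is a hypothetical toy, not an engine; `evaluate` is a stand-in for a real scoring function.

```python
# Sketch: depth-limited minimax where a heuristic score replaces the
# search below the cutoff. `evaluate` is a hypothetical stand-in; a real
# engine would score material, mobility, king safety, etc.

def evaluate(node):
    """Crude heuristic: just take the first leaf reachable from here."""
    while isinstance(node, list):
        node = node[0]
    return node

def minimax_limited(node, depth, maximizing):
    if isinstance(node, (int, float)):   # true leaf: exact outcome
        return node
    if depth == 0:                       # horizon: heuristic replaces search
        return evaluate(node)
    values = (minimax_limited(c, depth - 1, not maximizing) for c in node)
    return max(values) if maximizing else min(values)

# At depth 1 the heuristic scores [9, -9] as 9; one ply deeper, the
# opponent's reply is seen and the exact value is -9.
print(minimax_limited([[9, -9]], 1, True))  # 9  (heuristic guess)
print(minimax_limited([[9, -9]], 2, True))  # -9 (exact)
```

The disagreement between the two depths is precisely the sense in which the heuristic is a prediction of what deeper search would find.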
I know, and I had read chessprogramming prior to your post; you are wrong to assume that I am a total idiot just because I got myself confused.
Didn’t mean to condescend, I was mostly pointing out that complexity is in the iteration of simple rules with a fairly wide branching. I will still argue that all the heuristics and evaluation mechanisms used by standard engines are effectively search predictions, useful only because the full search is infeasible, and because the full search results have not been memoized (in the not-so-giant-by-today’s-standards lookup table of position->value).
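The memoization point can be sketched too: cache exact search results by position so nothing is computed twice. Toy positions here are tuples (hashable), not real chess positions.

```python
# Sketch of the "memoized lookup table" idea: cache the exact
# position -> value results of the search. Toy positions are nested
# tuples so they are hashable; real chess would key on a position hash.
from functools import lru_cache

@lru_cache(maxsize=None)
def value(node, maximizing):
    if isinstance(node, (int, float)):   # leaf outcome
        return node
    vals = [value(child, not maximizing) for child in node]
    return max(vals) if maximizing else min(vals)

tree = ((3, 5), (2, (9, -1)))
print(value(tree, True))  # 3 -- and now cached: later lookups are O(1)
```

This is essentially what engine transposition tables do within one search; the full table over all reachable positions is the infeasible limit of the same idea.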
How simple-seeming rules can generate large future state-spaces is a fascinating and important topic. Compare with https://en.wikipedia.org/wiki/Mandelbrot_set—it’s a single, simple calculation: z^2 + c, repeated many times to determine whether it stays bounded for any given point. https://en.wikipedia.org/wiki/Chaos_theory makes for a fun rabbit-hole.
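The whole iteration fits in a few lines, which is the point: one trivial rule, enormous resulting structure.

```python
# The Mandelbrot escape-time rule: iterate z -> z^2 + c and check
# whether |z| ever exceeds 2 (if it does, the orbit is unbounded).

def escapes(c, max_iter=100):
    """Return the iteration at which |z| exceeds 2, or None if it stays bounded."""
    z = 0
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i
    return None

print(escapes(0))   # None: 0 is in the set
print(escapes(1))   # 2: escapes quickly, so 1 is outside the set
```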
Put that way I completely agree.