AI
Thursday, August 28, 2003
  Scott,

Your comments about AI in the wake of Kramnik's draw with Deep Fritz got me thinking...


(An interesting theory says that a creation can never exceed its creator - i.e. God could never create something greater than God, and we could never invent a brain smarter than our own. I hope that isn't true, at least in limited contexts - I believe we can give a machine a framework to create and innovate, to test its own creative theories, and to integrate new knowledge into its existing framework without human intervention, and that given a suitable process and framework to think and grow it could, um, evolve to be better than we are. If it stays directed by our own efforts, then no, I don't think our creations can exceed us. In that light, I offer the following.)

I think the Chess machines are limited by the same thing almost all "artificial intelligence" efforts are limited by these days - we make them think more or less like humans instead of like Gods. Both teams (Deep Blue and Deep Fritz) are efforts not to perfect chess or to add to chess knowledge, but to encapsulate a functional encyclopedia of what we know about chess - our conventional knowledge. Each piece is issued a point value (which changes a bit based on the situation, but not much). Castled kings are better than uncastled kings. A square in the middle is better than a square on the edge. Covered pawns are better than loose pawns. Paired knights and bishops are better than one of each. Etc. etc. etc. They use brute force to search through as many positions as they can - but they need human evaluations to say "this position is better than that position" to direct the search. Rules are used for evaluating positions, and then other rules are used for pruning the branches of the search, so that branches unlikely to produce ideal moves can be discarded. They go further with these rules, or heuristics, with extensive opening books and endgame databases, known gambits and lines of play, and even fine-tuning for individual opponents.
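(To make that concrete, here's a rough sketch in Python of the kind of hand-written evaluation I'm describing - not Deep Blue's or Deep Fritz's actual code, and the board layout, piece values, and bonus numbers are just illustrative assumptions of mine. The point is that every number in it comes from a human; the search merely applies those numbers very fast at the positions it reaches.)

# A rough, assumed sketch of a hand-written evaluation function:
# human-assigned piece values plus a small human-assigned bonus for
# central squares. A depth-limited search would call this at its leaf
# positions to decide which branches look "better".

PIECE_VALUES = {"P": 1.0, "N": 3.0, "B": 3.0, "R": 5.0, "Q": 9.0, "K": 0.0}
CENTER = {"d4", "d5", "e4", "e5"}   # "a square in the middle is better..."

def evaluate(board, side):
    """Score a position from `side`'s point of view (positive = better for `side`).

    `board` is assumed to be a dict mapping squares like "e4" to
    (color, piece) tuples, e.g. {"e1": ("white", "K")}.
    """
    score = 0.0
    for square, (color, piece) in board.items():
        value = PIECE_VALUES[piece]
        if square in CENTER:
            value += 0.1            # small, human-chosen positional bonus
        score += value if color == side else -value
    return score

# Toy usage: white is up a knight and occupies the center.
example = {
    "e1": ("white", "K"), "e4": ("white", "N"), "a2": ("white", "P"),
    "e8": ("black", "K"), "h7": ("black", "P"),
}
print(evaluate(example, "white"))   # positive number: white looks better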

Heuristics are shortcuts. They make the project feasible at this point, but they also place a limit on the quality of the tool we produce. Given the current strategy for producing chess programs, the ultimate product is simply the perfect historical chess player. It will know every noteworthy game ever played, know every line, adjust the values of positions and pieces based on the historical games of its opponents, and find positions and moves no human would ever have found - not because it thinks better than humans, but because it thinks FASTER and more perfectly. It never forgets, never gets confused in its thinking, is absolutely methodical. The perfect result of this endeavor would be the ideal exponent of current human chess theory. Would a person ever be able to beat that machine? Only if that person has knowledge the machine doesn't have, either because the person knows more than the programmers did, or because the person has innovated - that is, added to the available chess knowledge on Earth. And if the player innovated, could they win a game? Even a series? Sure - until the programmers analyzed the innovation, understood it, and successfully added it to the program's repertoire.

This is where chess programs (and AI in general) fail, in my opinion. They mimic our understanding of the universe in question (be it the reaches of space or an 8x8 board). I think AI should answer the question: if God came down and took white, what would His first move be? I posit that we don't have that answer yet. The computer plays our best guess at moves. It uses our rules, our conventional wisdom, our understanding of the problem's universe to make decisions. AI will have reached a critical threshold when we find a framework to present the problem to it and it makes its own decisions about the game.

I believe, first off, that it is entirely possible that Chess is solvable. I also believe it is entirely possible (though somewhat unlikely) that the ideal first move is 1. Na3 or 1. h4. I believe that much of our conventional wisdom on chess is correct; I also believe that even good rules have exceptions, and that not all of our rules are as good as we think they are. They appear good because they are greeted with typical, imperfect, human responses.

When Chess can be presented as a math problem to a mathematical processor, with the only human input being the rules of the game, THEN we will have a machine that can teach us Chess instead of the other way around. And when we have a processor imbued with the laws of physics, and with the things we value as "good" as its inputs - one that can treat (say) aerodynamics and engine design as a mathematical problem - that is the day we get cars that are truly environmentally friendly. What are the chances that the gasoline-powered internal combustion engine is the ideal choice for personal transportation? Nil. It's just the best one that occurred to us in the 100 or so years that people have been thinking about such things. What are the chances that the Ruy Lopez or the Giuoco Piano or any other line of play is the best of all possible lines of play? Slim.
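(Here is what "only the rules as input" looks like in miniature - again a rough Python sketch of mine, not anybody's real engine. Chess is far too large to exhaust this way, so tic-tac-toe stands in as a game small enough to solve outright: the only human inputs are the legal moves and the fact that winning beats drawing beats losing, and the program works out everything else for itself.)

# Exhaustive minimax with no heuristics, no opening book, no positional
# values - just the rules and win/draw/loss. Tic-tac-toe used as a stand-in
# for a game small enough to solve completely.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, to_move):
    """Return +1 if `to_move` wins with best play, 0 for a draw, -1 for a loss."""
    if winner(board) is not None:
        return -1                           # the previous player just won
    if "." not in board:
        return 0                            # board full: draw
    best = -1
    for i, cell in enumerate(board):
        if cell == ".":
            child = board[:i] + to_move + board[i + 1:]
            other = "O" if to_move == "X" else "X"
            best = max(best, -solve(child, other))
            if best == 1:
                break                       # can't do better than a forced win
    return best

print(solve("." * 9, "X"))                  # 0: perfectly played tic-tac-toe is a draw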

So, will there ever be a machine we can't beat? Theoretically we will have the slimmest of chances as long as we retain the ability to innovate beyond the machine's ability to deal with innovation. But until we eliminate the shortcuts - the human evaluations beyond "winning is better than drawing is better than losing" - our chess machines will always be cousins of "The Turk", the chess automaton that concealed a human being, with all a human's frailties. And there will always be thinking people ready to play the part of Poe, trying to find and expose the charlatan inside.
 
Wednesday, August 27, 2003
  Regarding Artificial Intelligence...  
