AI
Monday, December 15, 2003
  Another anecdote that supports the computers-take-over theory comes from my Grad project interview with Larry Christiansen, U.S. Chess Champ in 1980, '83, and '02. I asked Larry whether and when he thought machines would be unbeatable by the best human players. He said that they already are unbeatable in short games like blitz (5 minutes) or rapid (up to 30 minutes), because even the most experienced human player can't keep up with the computer's calculation speed in games as short as these. Using a standard tournament clock, humans can still compete, says Larry, but not for long; he thinks it will be 5-10 years before Human/Computer standard tournaments are moot.

Larry knows the programmers from the Deep Blue team, and he said that there is nothing to Kasparov's claim that a temporary shutdown of Deep Blue helped it win the final game.

So while chess may never be reduced to an equation, it seems, for practical purposes, to have been solved with speed.

 
Thursday, December 11, 2003
  I had the previous post (Um, OK... etc.) on my hard drive for something like a month or two. I wasn't sure what else I wanted to say, or whether I believed what I had already said. This goes counter to the blogging no-edit ethos, I'm sure, but nobody has taught me the blogging ethos; I'm just sort of assuming it, so I'm probably making it up anyway. It's my blog, and I'll edit if I want to. Or something.

Anyway, I saw a show - Nova, I think, or maybe Scientific American or something similar, but I think it was Nova. I just thought you might be interested to know that somebody thought enough of my ideas not only to do it, but to make a TV show about it. (Tongue in cheek, obviously. Not only does the following probably confirm that my ideas are pedestrian, but also that they are considered obvious by most in the field.) They (Case Western, maybe?) made a physiq of water, came up with several Tinkertoy-type pieces that could fit together, move, and "joint" in different ways, an algorithm to join the pieces together more or less randomly, another algorithm (I would imagine) that told it to move in more or less all the ways it could move, and said, SWIM!!! Each creation graded itself on its swimming ability. Successful swimmers got to "breed" again. (Sounds too much like college now. Just kidding.) Some crazy-ass little swimmers came out, not least of which was one that could only be called sperm-like.

Another school did a similar thing with a land physiq, and told its creations to walk. Which they did - not very well, but they walked - some of them, anyway. It was disconcerting to see what you would have to struggle not to call "an organism" be born, strain, squirm and fall over, helpless, a dead end in natural selection - except that it is patently not organic, and the selection is absolutely artificial. The birthing is fake, the raw materials are fake - but the physiq is more or less real, and the effort of the creation to execute its task is still compelling somehow. Even the most successful entries (so far) in this fake contest of life generally seem to me not to be walkers, but odd creations that were apparently "designed" for some other purpose and that could also move around a bit, like a tent frame that by moving its center pole could shuffle around a few inches. Curious. Limits of the initial conditions of the game, I would imagine.

Anyway, these researchers weren't as impressive in the simulation arena imho as the swimming folk were, but they went one step further (or back, depending on how much you value the physical world - which by the way is the fault in the Ontological Proof of the existence of God... but that's for another time in another blog). They attached the computer where the simulation lived to a sort of general manufacturing apparatus, sort of like a kiln combined with a pottery wheel, capable of creating out of some sort of molded plastic-like muck (a polymer, maybe?) the model that the simulation had come up with. (!) You open the door, and there is the precise construct you just saw on the computer screen. A human plugs in a little motor where the computer indicates, which provides the muscle to make the joints move (which the computer simulated in the physiq, of course), and presto - the thing should walk. Then they melt the sucker back down so the thing can create whatever it thinks of next.

It's a little spooky, actually. The machine comes up with some never-before-seen construct that walks in some way or other - whether it's a mangled, paper-clip-looking thing limping along by a free wire, or a tall drink-o'-water that decided its primary mode of transport would actually be to fall down - gracefully, fluidly, with some degree of rigidity and curve, so that the fall continued until it was upside down. It would then fall down again a few times, slinky-like, until it was out of momentum. Anyway, whatever creation the "random" engine happens to come up with - paper clip, neo-slinky, warped swing set with an extra leg, whatever - if it grades out well, machines whirr, the plate spins, blades shape, heat bakes, and bing! Your walker is ready.

Most fascinating. Not sure if things like this encourage my thoughts or discourage them. But there it is.  
  Um, OK. I'm going to try to get some things down that have been rattling around in my head. Most have been rattling for some time, some just since I started reading Darwin Among the Machines, which, by the way, appears so far to be an ambitious book executed in a helter-skelter, ad-hoc way, with some great ideas and research mixed with some ambiguous lines of logic and fuzzy thinking. In other words, it seems to have aspects of greatness mixed with fallibility and some amateurish execution, which should do nothing but encourage everybody out there with an idea to write a book.

Anyway.

I have referred to an opinion I hold that part of the practical beauty and creativity of AI would arise if we were able to present a problem to it in terms it can understand, and have it work out solutions unguided that we would never have thought of, with as few rules of thumb or guidelines or shortcuts or heuristics as possible. (The most ready application I can think of for this sort of problem solving would be in design, and so I tend to focus on that, though the ability to solve more general problems is also possible.) This has little to do with a Turing test or HAL 9000 or machines with self-awareness, but since it has to do with knowledge representation and creative "thought" without human intervention, I'll continue to use the term AI for this.

This sort of AI design system would obviously have to have methods of attacking the problem built in, and some of them would no doubt be the methodical, systematic problem-solving methods that we have developed, like linear programming, statistical analysis, probabilistic methods, finite element analysis and structural methods, thermodynamics, all the branches of physical and theoretical science and math including algebra, geometry, calculus, graphical methods, differential equations, formal logic and so forth, and of course the old reliable brute-force techniques. Reactive techniques like pattern recognition and data mining, which seek to make sense of existing data, would also be part of the toolset, I'm sure. Once you have this toolset, there is enormous challenge in programming a comprehensive "brain" or overmind to analyze the problem, coordinate the execution of these methods, and weigh and combine the results - even for a problem that is precisely and meticulously designed specifically for the brain, laid out in a format ideally suited to the brain's processes. If AI becomes widely useful in solving problems, I believe there will be "brain companies" whose primary mission will be to arrange and publish these "brains" and their toolsets - like the Mathematica suites Wolfram used to make and probably still does. I know there are many, many programs out there that consist of frameworks for presenting problems to a processor for analysis by one or several methods; this will be similar, but infinitely more flexible. The layout of these brains and their tools, and the design of the interface, would be a worthy life's work for any 10 bright inventors, I think. I don't mean to minimize this task.
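Concretely, the shape I have in mind is something like the toy Python sketch below. Every name, tool, and toy problem in it is invented for illustration - not a real product or library - but it shows the division of labor: a registry of solver methods stands in for the toolset, and a tiny "overmind" runs whatever tools it has, weighs their scores, and keeps the best answer.

# A toy sketch, not a real system: a "toolset" of solver methods and an
# "overmind" that runs whichever tools it has, compares their scores, and keeps
# the best result. The problem format (candidates, a scoring function, a
# neighbor function) is invented purely for illustration.

def brute_force(problem):
    # Try every candidate and keep the best-scoring one.
    best = max(problem["candidates"], key=problem["score"])
    return best, problem["score"](best)

def hill_climb(problem, steps=100):
    # Greedy local improvement starting from the first candidate.
    current = problem["candidates"][0]
    for _ in range(steps):
        better = max(problem["neighbors"](current), key=problem["score"], default=current)
        if problem["score"](better) <= problem["score"](current):
            break
        current = better
    return current, problem["score"](current)

TOOLSET = {"brute_force": brute_force, "hill_climb": hill_climb}

def overmind(problem):
    # The "brain": run every tool, weigh the results, return the winner.
    results = {name: tool(problem) for name, tool in TOOLSET.items()}
    winner = max(results, key=lambda name: results[name][1])
    return winner, results[winner]

# Toy problem: find the integer in 0..50 whose square is closest to 1000.
problem = {
    "candidates": list(range(51)),
    "score": lambda x: -abs(x * x - 1000),
    "neighbors": lambda x: [v for v in (x - 1, x + 1) if 0 <= v <= 50],
}
print(overmind(problem))   # both tools land on 32 here

Swap in real solvers (a linear-programming package, a finite element code, a statistics library) and a real problem description, and the coordination work described above is exactly the hard part that remains.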

However, I will again state my opinion that this would be a creative and flexible and useful application of current human methods, and nothing more. While this may comprise artificial intelligence by some definitions, and certainly would greatly increase the utility of processors in aiding the creative process, nothing would be gained by using this tool that enough trained people given a reasonable amount of time couldn't also do. It's a timesaver, essentially.

An order-of-magnitude breakthrough in the solution of problems by a largely independent technological entity, however, will necessarily involve the ability of a machine to theorize about solutions, test them out, and adjust its hypotheses based on the results. An iterative process is the likeliest way that a machine will be able to proceed through an extended creative analytical process without the intervention of a human to evaluate intermediate results, re-set the problem in light of those results, and start the process again. (This is typically how we currently design things - we come up with hypotheses, design plans using technology but directed by us, construct models using technology but directed by us, perform tests using technology but directed by us, and then rinse and repeat. This rinse and repeat - evaluating the results, tweaking or reworking the hypothesis, and resetting the process - is often very time- and labor-intensive, and is often the limiting factor in how many design iterations are allowed before the engineers are shot and production begins.) This ability of an AI system to make theories and test them out is probably a requirement, but I see only two ways to allow the machines this luxury. First, program the systems with the ability to make models of their tentative solutions, and allow them access to physical facilities with appropriate robots and the ability to program them, and to whatever proving ground is required for the problem at hand, along with measuring equipment. This has obvious problems (although, for specific kinds of problems, it isn't as ludicrous as one might think at first). That brings us to the second option: allow the computer the use of a virtual test bed, a consistent logical sandbox it can use to test its theories without mucking around with our physical world and all its costs and limitations. Meaning, a virtual world to test its solutions on - or more accurately, in.
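In toy form, the loop I'm describing looks roughly like this (Python, with the "test bed" as a stand-in function and every number invented - the only point is the hypothesize/test/adjust cycle running with no human in it):

import random

def test_bed(design):
    # Stand-in for a proving ground, physical or virtual: score a design by
    # how close its made-up "span" and "chord" get to an assumed sweet spot.
    return -((design["span"] - 7.0) ** 2 + (design["chord"] - 2.0) ** 2)

def adjust(design, scale=0.5):
    # Propose a nearby hypothesis by nudging the current one.
    return {k: v + random.uniform(-scale, scale) for k, v in design.items()}

def iterate_design(initial, rounds=500):
    best, best_score = initial, test_bed(initial)
    for _ in range(rounds):
        candidate = adjust(best)        # hypothesize
        score = test_bed(candidate)     # test
        if score > best_score:          # adjust: keep only what tested better
            best, best_score = candidate, score
    return best, best_score

print(iterate_design({"span": 1.0, "chord": 1.0}))

The expensive part in real life - building and measuring the model - is exactly what a virtual test bed is supposed to make cheap.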

This is a huge jump.

What we're talking about is programming a virtual-reality universe in which friction is proportional to the normal force, where gravity is inversely proportional to the square of the distance between two objects, and so on. Newton's laws of motion hold. Relativity holds. Quantum mechanics holds. Acids and bases react. Iron rusts. Brittle items break, while plastic items stretch. Things we don't understand the causes of but can observe (like turbulent fluid flow, dark-matter gravity and electron spin symmetry) can be included too. The more complete the physics of the model, the more accurate the results, but for most applications (designing an aerodynamic car, for example), quantum effects might not have much bearing (much less the gravitational pull of dark matter). For these applications, you might be able to buy a slimmer, less complete brain with less of our observable world built in. It is not necessary to present a visual interface for this space; in effect, it is a set of behaviors that describes in equations the forces that govern our world. But the world has to be ready for any object you throw into it, and it has to respond the way the real world would, within acceptable tolerances. With my limited experience, I haven't heard of such a thing, so in the little world I live in, I get to call it whatever crazy name I feel like. Let's call it a physiq - that is, a coherent virtual universe for use in developing and testing theories, which includes a framework for introducing objects, behaviors, or systems of objects and behaviors, and which produces results analogous to the real world. (Kenley Technologies' "Milky Way Light," the easy, flexible, affordable physiq for everything you need to do, on sale now, only $1995!)
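For what it's worth, here is a toy Python sketch of the shape such a physiq's interface might take - a couple of built-in laws (gravity and a crude drag), a way to add objects, a way to step time forward, and a way to ask what happened. It is nothing like a real physics engine, and the names and numbers are all invented:

# A toy "physiq": a world with a few built-in laws that accepts arbitrary
# bodies, steps time forward, and reports the outcome. Deliberately minimal.

class Body:
    def __init__(self, mass, x=0.0, y=0.0, vx=0.0, vy=0.0):
        self.mass, self.x, self.y, self.vx, self.vy = mass, x, y, vx, vy

class Physiq:
    G = 9.81       # gravitational acceleration at the surface, m/s^2
    DRAG = 0.05    # crude linear air-drag coefficient

    def __init__(self):
        self.bodies = []

    def add(self, body):
        self.bodies.append(body)
        return body

    def step(self, dt=0.01):
        for b in self.bodies:
            ax = -self.DRAG * b.vx / b.mass
            ay = -self.G - self.DRAG * b.vy / b.mass
            b.vx += ax * dt
            b.vy += ay * dt
            b.x += b.vx * dt
            b.y += b.vy * dt
            if b.y < 0:                # a crude ground: nothing falls through it
                b.y, b.vy = 0.0, 0.0

    def run(self, seconds, dt=0.01):
        for _ in range(int(seconds / dt)):
            self.step(dt)

# Lob a 2 kg projectile into the world and ask where it ended up.
world = Physiq()
ball = world.add(Body(mass=2.0, vx=10.0, vy=10.0))
world.run(5.0)
print(round(ball.x, 2), round(ball.y, 2))

A real physiq would need collisions, friction, materials, chemistry and the rest, all consistent with one another - which is the "huge jump" part.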

(By the way - this ability to automatically rinse and repeat is one of the reasons chess has been so successful and common as a playground for AI: it's a finite universe with discrete success and failure criteria - a win or a loss - and it's easily represented in the abstract. So a system can be programmed with the rules, hypothesize - that is, put forth a possible move - then analyze the possible replies, then do it again, all without having to actually do anything in the physical world. If the AI had to actually make every move it analyzes on a real chessboard using a robotic arm in order to explore lines of play, the efficiency of the process would obviously be drastically reduced. Deep Blue gets to do the rinse and repeat of the design process without any human intervention at all.)
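To make the "put forth a move, analyze the replies, repeat" point concrete, here is a toy Python sketch of that kind of lookahead - over a tiny take-away game rather than chess, and nothing like how Deep Blue actually worked, but the same no-hands-required recursion:

def best_move(stones, my_turn=True):
    # Toy take-away game: a move removes 1, 2, or 3 stones, and whoever takes
    # the last stone wins. Values are scored from "my" point of view: +1 means
    # I can force a win, -1 means I can't. On my turns the search maximizes
    # that value; on the opponent's turns it minimizes it. Returns (value, move).
    if stones == 0:
        # Whoever just moved took the last stone and won.
        return (-1 if my_turn else 1), None
    best_val, best = None, None
    for take in (1, 2, 3):
        if take <= stones:
            val, _ = best_move(stones - take, not my_turn)
            if my_turn and (best_val is None or val > best_val):
                best_val, best = val, take
            if (not my_turn) and (best_val is None or val < best_val):
                best_val, best = val, take
    return best_val, best

print(best_move(10))   # (1, 2): taking 2 leaves 8, a lost count for the opponent

Every "move" here is just a recursive call; nothing ever has to be played out on a physical board, which is the whole advantage.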

Now that we have a sandbox in which our "brain" can test its models, we get to tell our brain what we are looking for and set it to work. This "working" is key, of course. The toolsets and physiqs of our AIs will continue to improve, and they are crucial to the efficacy of this process. But the manner in which the overmind proceeds with the designing of hypotheses and the corresponding models is where the real designing happens. Most "design" in whatever field comes from using rules of design we have learned, applying them in a way that meets design parameters and is pleasing, with the occasional dash of instinct or playing of a hunch thrown in. Even when a Eureka moment occurs these days, it is often in the design or redesign of a small component of a larger system.

However, here is where our AI shines. Our AI, with its huge processing power and quick iterations, will, if we want it to, have the advantage of redesigning everything as many times as it wants, from the ground up. No assumptions, no taken-for-granteds. No dogma holding it back. We can ignore all the things we think we know, the assumptions we make about design. We can start our AI back in the figurative primordial ooze, saying here is a table of substances we know of, here are costs of labor and raw materials and energy, here is what we're trying to accomplish, now go at it.

The likely engine of this process? Evolution and related techniques. Mutation, randomness and survival of the fittest.
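In the meantime, here is a toy Python sketch of what that engine tends to look like. The "designs" are just short lists of numbers and the fitness function is a stand-in for a physiq trial - everything is invented for illustration - but mutation, crossover, and survival of the fittest are all present:

import random

def fitness(design):
    # Stand-in for a physiq trial, e.g. "how far did this contraption walk?"
    target = [3.0, -1.0, 4.0, 1.5]                   # an arbitrary, invented optimum
    return -sum((d - t) ** 2 for d, t in zip(design, target))

def mutate(design, rate=0.3, scale=0.5):
    # Random, undirected tweaks to a design.
    return [d + random.uniform(-scale, scale) if random.random() < rate else d
            for d in design]

def crossover(a, b):
    # "Breeding": each gene comes from one parent or the other.
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(pop_size=40, genes=4, generations=200):
    population = [[random.uniform(-5, 5) for _ in range(genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 4]      # survival of the fittest
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children            # the fit get to "breed" again
    return max(population, key=fitness)

best = evolve()
print([round(g, 2) for g in best], round(fitness(best), 3))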

More on that at another time.

The ability of computers to provide a forum or framework for the evolution of systems, combined with automated problem-solving methods, an iterative design process, an interface for presenting a problem, and a productive overmind... Here we have not only the makings of artificial intelligence, but an automated system that fundamentally exceeds what a human brain could even theoretically achieve. It would be a combination of reason and nature.

Now we're talking.

 