I’m sorry, Mr. Hawkins, I’m afraid I can’t do that

I recently finished reading Jeff Hawkins’ On Intelligence, which, at least from a layman’s perspective, is one of the most amazing books on artificial intelligence I’ve read in some time. When working on user interfaces all day long, delicately translating the always-logical Yes/No language of computers into the always-fuzzy maybe-I’m-not-sure language of humans, it’s easy to start daydreaming about the possibility that machines could be designed in such a way that they possessed something akin to actual intelligence, able to actually understand input and provide sensible output, rather than simply doing whatever some programmer, who never met the human they are currently interacting with, told them to do. Amazingly, if Hawkins’ ideas are correct, the notion of an intelligent machine may become a reality. I know it sounds like some bad sci-fi flick, but bear with me here.

Hawkins begins his work by describing how traditional AI thinking has fixated on the computer as a model of the brain, the thinking being that if we can just produce faster computers and smarter algorithms, we’ll eventually end up with human-level intelligence. This, of course, has not happened. Instead, AI research seems to have ground to a painful halt. It would seem, then, that when Hawkins applied to the Ph.D. program at the vaunted MIT AI department with a fresh idea, to study the biological makeup of the human brain and use it as a model for designing intelligent machines, he would be welcomed with open arms. Curiously, and very unfortunately for MIT, Hawkins was turned down. So he decided instead to do his research at a biology department, which was happy to have him, and where he developed a model for designing intelligent machines that is both utterly brilliant and, at the same time, almost embarrassingly obvious in hindsight.

In a nutshell, Hawkins’ predictive memory model is based on the idea that all our thinking is tied to comparing the world we perceive via our senses to a model of the world stored in our brain. An infant, for example, will have a very undeveloped world model, containing only memories formed after birth (and possibly also from within the womb, and maybe even pre-stored information based on the predilections of its parents, though Hawkins doesn’t delve into the nature/nurture issue). For an infant, the world model, i.e. its perception of the world in which it exists, is constantly being updated. Constantly encountering new faces, things, words, and sounds, virtually every new sensory input represents something that did not match the current model. In turn, the infant will respond to the input it receives, such as attempting to mimic the sounds of its mother’s voice.

While massive storage of every piece of input is a cornerstone of Hawkins’ model, another key element is extremely rich cross-association. Think of it as a relational database in which everything can be related to everything. All the inputs the infant takes in are cross-associated with all other sensory inputs, as well as with dimensional factors such as time and location. This means that when a new input is received, that input is compared to all previous associations of similar input. If the associations match, any memorized responses to the input can be taken (such as seeing a piece of food, recalling that it tasted good, and therefore reaching for it); but if the associations do not match, the world model must be updated.
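To make that predict-compare-update loop concrete, here is a toy Python sketch of the idea. It is my own illustration, not anything from the book: the class name, the context strings, and the use of a plain dictionary are all stand-ins for what Hawkins describes as a vastly richer, hierarchical memory.

    # Toy sketch (my own, not Hawkins' algorithm) of the loop described
    # above: every input is cross-associated with the context in which it
    # occurred, and a prediction mismatch triggers an update to the model.
    from collections import defaultdict

    class WorldModel:
        def __init__(self):
            # context -> set of inputs previously observed in that context
            self.associations = defaultdict(set)

        def perceive(self, context, sensory_input):
            # Compare the new input to everything previously associated
            # with this context (time, place, etc., in Hawkins' terms).
            if sensory_input in self.associations[context]:
                return "match: recall stored response"
            # The prediction failed, so the world model itself is revised.
            self.associations[context].add(sensory_input)
            return "mismatch: world model updated"

    model = WorldModel()
    print(model.perceive("kitchen table", "apple"))  # mismatch: first encounter
    print(model.perceive("kitchen table", "apple"))  # match: now predicted

The point is only the shape of the loop: input that matches memory is handled from memory, while input that doesn’t rewrites the memory itself.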
In order to update the world model, additional sensory input may be required. So, if the infant sitting in its mother’s lap sees something on the kitchen table that it does not recognize, but it does associate items on the kitchen table with good-tasting food, it might test a possible new world model by reaching out and placing the item in its mouth. Upon discovering that the item clearly is not food and clearly is not tasty, it can update its world model to reflect that not all items on a kitchen table are food.

Obviously, there is a lot more to intelligence than comparing an internal world model to the outside world and responding accordingly, and I am definitely glossing over a lot from Hawkins’ book. But what really makes the Hawkins model so brilliant is how he shows, using the construct of the biological brain as a reference, that the traditional AI model will need to be turned on its head for it to work. A core problem of traditional AI, Hawkins shows, is that it attempts to compute a response. He uses the task of catching a ball as an example. Building a robot that can catch a ball is incredibly complex, requiring calculations of the ball’s trajectory, joint movements, and so on; implementing this takes millions of programmed steps. But with the stored memory model, it is merely a question of perceiving the ball, retrieving the stored movements that correspond to that path of the ball, and sending those instructions to the respective body parts. Of course, the process of storing those movements may have taken a lot of trial and error, but just as with the proverbial learning to ride a bicycle, once you know it, you know it, no computation necessary.
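The ball-catching contrast can be caricatured in a few lines of Python. Again, this is my own toy illustration, assuming a hand-made lookup table of trajectories; a real memory-prediction system would generalize across similar trajectories rather than require exact matches.

    # Toy contrast (mine, not from the book): instead of computing joint
    # angles from the ball's physics, the "brain" simply looks up the motor
    # sequence it previously memorized for a similar perceived trajectory.
    stored_catches = {
        "high arc, slow": ["raise arm", "open hand", "close hand"],
        "line drive":     ["extend arm", "close hand"],
    }

    def catch(perceived_trajectory):
        movements = stored_catches.get(perceived_trajectory)
        if movements is None:
            # No memory yet: trial and error would eventually store one.
            return ["flail", "miss"]  # ...and memorize whatever finally works
        return movements  # retrieved, not computed

    print(catch("high arc, slow"))  # recalled from memory
    print(catch("knuckleball"))     # unfamiliar input: learning required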
Hawkins’ model provides some interesting food for thought when it comes to structured design. In structured design, we gather our requirements, essentially creating a pre-defined world model, and then implement a solution to function within that world. But while the world model of the design remains static, the real world continues to change. It would be interesting to instead apply Hawkins’ approach to structured design (and in fact, Hawkins is doing just that with his new venture Numenta). In this model, we give our machine massive storage and sensory input/output capabilities, as well as the ability to cross-associate all input. Then, instead of programming the machine, we would train it to complete whatever tasks need to be supported. Once the machine is properly trained, we could either allow it to generate new memories and update its world model accordingly (in effect, allowing it to self-update), or we could disallow new memories, making it an intelligent automaton.

The possibility that Hawkins’ ideas are sound is both exciting and a bit scary. Part of me looks forward to truly useful applications of this brilliant idea; part of me fears something akin to Bill Joy’s article in Wired about how machines will eventually outsmart us.
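Going back to the self-update versus automaton distinction for a moment, here is one last toy Python sketch of those two modes. The class and its learning flag are my own invention for illustration and have nothing to do with Numenta’s actual software.

    # Toy sketch (my invention) of the two deployment modes above: the same
    # trained machine either keeps forming new memories (self-updating) or
    # has learning switched off (an "intelligent automaton").
    class TrainedMachine:
        def __init__(self, memories, learning=True):
            self.memories = dict(memories)  # task -> learned response
            self.learning = learning

        def handle(self, task, observed_outcome=None):
            if task in self.memories:
                return self.memories[task]
            if self.learning and observed_outcome is not None:
                self.memories[task] = observed_outcome  # self-update
                return observed_outcome
            return None  # automaton mode: unfamiliar input goes unhandled

    automaton = TrainedMachine({"greet": "hello"}, learning=False)
    print(automaton.handle("greet"))     # trained behavior
    print(automaton.handle("new task"))  # None: no new memories allowed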