AI: A Return to Meaning

David Ferrucci

Computer Scientist and Watson Co-Creator

Artificial Intelligence started with small data and rich semantic theories. The goal was to build systems that could reason over logical models of how the world worked; systems that could answer questions and provide intuitive, cognitively accessible explanations for their results. There was a tremendous focus on domain theory construction, formal deductive logics and efficient theorem proving. The problem, of course, was the knowledge acquisition bottleneck; it was too difficult, slow and costly to render all common sense knowledge into an integrated, formal representation that automated reasoning engines could digest. In the meantime, huge volumes of unstructured data became available, compute power became ever cheaper and statistical learning methods flourished.

AI evolved from being predominantly theory-driven to predominantly data-driven. Automated systems generated output using inductive techniques. Training machine learning algorithms over massive data produced flexible and capable control systems and powerful predictive engines in domains ranging from language translation to pattern recognition, from medicine to economics. But what do the models mean? From the very inception of Watson, I put a stake in the ground: we would use a diversity of shallow text analytics and leverage loose and fuzzy interpretations of unstructured information. We would allow many researchers to build largely independent NLP components and rely on machine learning techniques to balance and combine these loosely federated algorithms to evaluate answers in the context of human-readable passages. The approach, with a lot of good engineering, worked! Watson became arguably the best factoid question-answering system in the world. WatsonPaths, its descendant, could connect questions to answers over multiple steps, offering passage-based “inference chains” from question to answer without a human writing a single “if-then rule.”
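The combination step described above can be pictured, very roughly, as an ensemble: each shallow analytic emits a score for a candidate answer, and a learned model weighs those scores into a single confidence used for ranking. The sketch below is an illustrative assumption, not Watson's actual pipeline; the three scorer features (passage match, type match, source popularity) are hypothetical, and scikit-learn's LogisticRegression stands in for whatever ranker a real system would use.

    # Illustrative sketch only -- not Watson's implementation.
    # Each candidate answer gets a feature vector of scores from
    # independent, shallow analytics; a learned model combines them
    # into one confidence used to rank candidates.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: one row per (question, candidate) pair.
    # Columns: passage-match score, answer-type match, source popularity.
    X_train = np.array([
        [0.9, 1.0, 0.7],   # candidate that was the right answer
        [0.4, 0.0, 0.9],   # wrong answer
        [0.8, 1.0, 0.2],   # right answer
        [0.3, 1.0, 0.6],   # wrong answer
    ])
    y_train = np.array([1, 0, 1, 0])  # 1 = candidate was correct

    model = LogisticRegression().fit(X_train, y_train)

    # At answer time, score each candidate and rank by learned confidence.
    candidates = ["Toronto", "Chicago"]
    X_new = np.array([
        [0.5, 0.0, 0.8],
        [0.9, 1.0, 0.6],
    ])
    confidence = model.predict_proba(X_new)[:, 1]
    print(sorted(zip(candidates, confidence), key=lambda p: -p[1]))

A production system would use hundreds of such features and a more sophisticated ranker, but the principle is the same: no hand-written if-then rules, only learned weights over loosely federated evidence.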

But could it reason over a logical understanding of the domain? Could it engage in fluent dialog, or automatically learn from language and build the logical or cognitive structures that enable and precede language itself? Could it understand and learn the meaning behind the words the way we do? This talk draws an arc from Theory-Driven AI to Data-Driven AI and positions Watson along that trajectory. It proposes that to advance AI to where we all know it must go, we need to discover how to efficiently combine human cognition, massive data and logical theory formation. We need to bootstrap a fluent collaboration between human and machine that engages logic, language and learning to enable machines to learn how to learn and ultimately deliver on the promise of AI.



Location: Salon A
April 11th, 2016
8:45 AM - 9:45 AM