For over fifty years, artificial intelligence (AI) research tried to automate logical reasoning with the goal of delivering intelligent machines capable of performing what are, for us, effortless cognitive tasks. Progress was slow, and we have yet to design a machine that can reason with common sense. But that might soon change.
How studying biology sparked extraordinary strides in computing
AI research has made an explosive leap forward over the past decade by abandoning logic and turning to biology for inspiration. At the forefront of this advance is Geoffrey Hinton, U of T Distinguished Professor and Google Distinguished Researcher. He and his colleagues have devised multi-layered artificial neural networks that empower machines not just to recognize patterns from sets of hand-designed features but to create much better features automatically.
The result is the stunning efficiency of “deep learning,” which now drives speech recognition, image search, face recognition and a rapidly growing number of other practical applications, soon to include machine translation. Recently, one of Hinton’s graduate students, George Dahl, used multi-layer neural networks to win a public competition for predicting which molecules might be useful in building new drugs; pharmaceutical companies are now using these networks.
How neural networks and deep learning work
The neural network approach to AI, inspired by the structure of the real brain, is almost as old as AI itself, but for years researchers considered it a dead end. The goal is to discover a learning procedure for multi-layer neural networks that allows each layer to discover new features that are informative combinations of the features in the layer below. One of Hinton’s early contributions to artificial neural network research, in 1986, was to show that a learning algorithm called “back-propagation” allowed neural networks to learn interesting and complex internal features with no hand-design by the programmer. But back-propagation learning was glacially slow in networks with many layers, and it was these deep networks that promised to revolutionize AI.
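The idea can be illustrated with a minimal sketch in Python with NumPy: a tiny two-layer network trained by back-propagation on XOR, a function no single-layer network can represent, so the hidden layer must invent its own internal features. The architecture, learning rate and iteration count here are illustrative choices, not Hinton’s original setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets: no single layer can separate these
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: the hidden layer learns internal features
W1 = rng.normal(0, 1, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)    # hidden features, learned, not hand-designed
    out = sigmoid(h @ W2 + b2)  # prediction

    # Backward pass: propagate the error derivative layer by layer
    d_out = (out - y) * out * (1 - out)  # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # chain rule into the hidden layer

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# After training, the predictions typically approach [0, 1, 1, 0]
print(np.round(out.ravel(), 2))
```

The key point of the sketch is the backward pass: each layer’s error signal is computed from the layer above it by the chain rule, which is exactly what becomes painfully slow as the number of layers grows.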
Students break new ground in speech and image recognition
In 2005, Hinton developed a much more efficient technique for training deep neural networks, and in 2009, using the latest graphics processing units developed for video games, two of Hinton’s students demonstrated the practical use of deep neural networks for recognizing the phonemes of human speech, a major test for AI at the time. It was, perhaps, the simplicity of their approach that turned heads: two students managed to beat the best results of mainstream AI research programs that had spent thirty years tackling the problem.
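That efficient training technique is widely known as greedy layer-wise pre-training, in which each layer is first trained on its own as a restricted Boltzmann machine (RBM) using a fast approximation called contrastive divergence. The following is a minimal sketch of one such RBM training step in Python with NumPy; the toy data, network sizes and hyperparameters are illustrative assumptions, not the published setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary data with structure: the last 4 bits copy the first 4
data = rng.integers(0, 2, size=(100, 8)).astype(float)
data[:, 4:] = data[:, :4]

n_vis, n_hid = 8, 6
W = rng.normal(0, 0.1, (n_vis, n_hid))
a = np.zeros(n_vis)  # visible biases
b = np.zeros(n_hid)  # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for epoch in range(500):
    # Positive phase: hidden activations driven by the data
    ph = sigmoid(data @ W + b)                     # P(h=1 | v)
    h = (rng.random(ph.shape) < ph).astype(float)  # sample hidden units

    # Negative phase: one step of Gibbs sampling (CD-1)
    pv = sigmoid(h @ W.T + a)                      # P(v=1 | h)
    v_neg = (rng.random(pv.shape) < pv).astype(float)
    ph_neg = sigmoid(v_neg @ W + b)

    # Contrastive-divergence updates: data statistics minus model statistics
    W += lr * (data.T @ ph - v_neg.T @ ph_neg) / len(data)
    a += lr * (data - v_neg).mean(axis=0)
    b += lr * (ph - ph_neg).mean(axis=0)

# Reconstruction error falls as the RBM captures the bit correlations
recon = sigmoid(sigmoid(data @ W + b) @ W.T + a)
print(np.mean((data - recon) ** 2))
```

In the layer-wise scheme, the hidden activations of one trained RBM become the “data” for the next, so each layer learns features of the features below it before a final pass of back-propagation fine-tunes the whole stack.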
In 2012, Hinton and two more graduate students, Alex Krizhevsky and Ilya Sutskever, developed a deep neural network called SuperVision to compete in a public competition to classify objects found in large sets of images. SuperVision won by a huge margin and triggered a paradigm shift in computer vision. Google promptly bought their startup company, DNNresearch, and hired the team; Hinton now splits his time between U of T and the Googleplex in Mountain View, California.
Hinton has won several Canadian and international awards and prizes in the past few years, and has garnered considerable media attention. He says he has been gratified to see the work of his group incorporated into our phones, our computers and, more generally, our lives. But the biggest prize, for Hinton, remains the holy grail of AI: to build a machine that can learn to think like a human.