Geoffrey Hinton won the 2012 Killam Prize for
Engineering and the 2010 Gerhard Herzberg Gold Medal for Science and Engineering for his contributions to machine learning and
artificial intelligence. Hinton joined U of T in 1987 after stints at Sussex University, the University of California, San Diego and Carnegie Mellon University. He has a PhD from the University of Edinburgh.
It took nature more than two billion years of incremental change to get from the first complex cells to the sophisticated thinking machine that is the human brain. Artificial intelligence researchers have covered much of that same ground in under a century.
Digital brains continue to gain on people.
Just in the past year, artificial neural networks took a big leap forward in their ability to recognize objects — one of the basic skills necessary to understand the world.
Today, a person and a computer flashed the same image identify its content with similar accuracy, though the person wins if she can move her eyes to focus on different parts of the image.
Quick object recognition, though, is just a small component of human intelligence.
“The thing that is special about people is recursion. We’re good at instant vision and recognition, but we’re excellent at putting those instants together,” says computer science professor Geoffrey Hinton. “The neural net people haven’t gotten very far with the sequential aspect of it — putting together those rapid glimpses into a coherent whole.”
You look at a tree. You think, “It’s a tree.” Then the interesting part happens: your neurons and signaling pathways subtly shift. Feedback messages shoot back and forth, and neural connections strengthen or weaken. Experiencing that tree alters your overall understanding of the world, better preparing your brain to interpret the next sensory experience. This adaptive “learning algorithm” is what allows us to have complex intelligence.
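The idea can be made concrete with a toy sketch (my own illustration, not any model Hinton describes here): a single artificial “neuron” whose connection weights shift a little after each experience, nudged by feedback about how wrong its guess was. The inputs, target and learning rate below are arbitrary placeholder values.

```python
def step(weights, inputs, target, learning_rate=0.1):
    """One 'experience': guess, receive feedback, adjust connections."""
    # Forward pass: the neuron's guess is a weighted sum of its inputs.
    guess = sum(w * x for w, x in zip(weights, inputs))
    # Feedback signal: how far off was the guess?
    error = target - guess
    # Adapt: each connection shifts in proportion to its contribution.
    return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]          # connections start knowing nothing
for _ in range(50):           # repeated experiences of the same scene
    weights = step(weights, [1.0, 0.5], target=1.0)
print(weights)                # weights have drifted toward a good fit
```

Each pass leaves the weights slightly better prepared for the next input, which is the whole point of the passage above: a very simple update rule, repeated, accumulates into structured knowledge.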
“Suppose I grow a plant in a flowerpot with a very complicated internal shape. Then I remove the flowerpot, leaving a root system with that same shape. Where did that complexity come from?
All you need is an adaptive algorithm in the plant’s DNA that
tells it to fill up space,” says Hinton. “It’s similar with the brain:
a simple and very powerful learning algorithm creates complicated knowledge structures by adapting to complex structure
in the external world.”
In nature, such an algorithm developed through gradual change over thousands of generations. Researchers aren’t limited to such a process — they can look for solutions in places evolutionary biology couldn’t go. It turns out, though, that nature’s model provides the most promising results.
“Curiously, making an artificial neural network similar to the brain appears to make it work better,” Hinton says. “It doesn’t have to be the case, but for perception, it seems to be.”
Hinton doesn’t speculate on when computers will make the leap that humans made millions of years ago — a learning algorithm that efficient is still some way off.
“We’re still trying to match the intelligence of cats and pigeons,” he says.
Mammals and birds, though, only emerged 200 million years ago — they represent the last 10 per cent of the evolutionary arc. Neural networks and the computers on which they run are still in their infancy.
What happens when we use our brains to create equally powerful artificial brains? Even Hinton agrees that we might find out in our lifetime.