ARCHIVED CONTENT
You are viewing ARCHIVED CONTENT released online between 1 April 2010 and 24 August 2018, or content that has been selectively archived and is no longer active. Content in this archive is NOT UPDATED, and links may not function.

Extract from an article by Gregory Piatetsky
Deep Learning uses multiple layers of neural networks, where each layer has many artificial “neurons” – simple units with many inputs and one output. Neural networks learn from experience: they take inputs, combine them inside neurons with different weights, and produce outputs, which can either feed into the next layer or serve as the final output – for example, yes/no, dog/cat, or how to move your character in a video game.
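The neuron described above – many weighted inputs combined into one output – can be sketched in a few lines of Python. This is an illustrative toy, not code from the article; the step activation and the function name `neuron` are assumptions for the sake of the example.

```python
# A minimal sketch of a single artificial neuron, assuming a simple
# step activation. All names here are illustrative.

def neuron(inputs, weights, bias):
    """Combine inputs with weights, add a bias, and apply a step function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # the single output: fire or not

# Example: with these weights and bias, the neuron fires only when
# both inputs are on (a logical AND).
print(neuron([1, 1], [0.5, 0.5], -0.7))  # fires: 0.5 + 0.5 - 0.7 > 0
print(neuron([1, 0], [0.5, 0.5], -0.7))  # stays off
```

In a real network, many such neurons run in parallel in each layer, and their outputs become the inputs of the next layer.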
Neural networks learn by backpropagation: the output they generate is compared to the correct one, and the errors are propagated back to adjust the weights, so that next time the same inputs produce an output closer to the correct one.
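The compare-and-adjust loop above can be shown with a toy example: a single linear neuron whose one weight is nudged in proportion to its error. This is an assumed, simplified sketch of the idea (one neuron, squared error, gradient descent), not the full backpropagation algorithm for a multi-layer network.

```python
# Toy sketch of the weight-update idea behind backpropagation,
# assuming a single linear neuron (output = w * x) and a fixed
# learning rate. All names and values are illustrative.

def train(samples, lr=0.1, epochs=50):
    w = 0.0  # start with an arbitrary weight
    for _ in range(epochs):
        for x, target in samples:
            output = w * x            # forward pass
            error = output - target   # compare to the correct answer
            w -= lr * error * x       # propagate the error back into the weight
    return w

# Learn the mapping y = 2x from a few (input, correct output) pairs.
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(round(w, 3))  # close to 2.0
```

In a multi-layer network the same principle applies, except the error signal is passed backward through every layer, adjusting all the weights along the way.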
Neural network methods were first tried back in the 1960s, but one-layer methods were limited in what they could learn, and multi-layer methods took so long to train that they were considered impractical. Work on neural networks was very unfashionable for many years, and in the 2000s it was sustained mainly by three researchers: Geoff Hinton in Toronto, Yann LeCun at Bell Labs, and Yoshua Bengio in Montreal.
Read the original article at: Deep Learning from 30,000 feet