Highway and Residual Networks Developed for Deep Learning (2015)


🚀 Development of First Deep Learning Algorithm

Alexey Ivakhnenko and Valentin Lapa developed the first Deep learning algorithm for multilayer perceptrons in the Soviet Union.
Development of First Deep Learning Algorithm (1965)
Deep Learning, Alexey Ivakhnenko, Valentin Lapa, Multilayer Perceptrons, Neural Networks, Early Deep Learning
Soviet Union

🧠 Stochastic Gradient Descent for Deep Learning

Shun'ichi Amari was the first to use Stochastic gradient descent for Deep learning in multilayer perceptrons. In computer experiments conducted by his student Saito, a five-layer MLP with two modifiable layers learned useful internal representations to classify non-linearly separable pattern classes (a sketch of this kind of setup appears below).
Stochastic Gradient Descent for Deep Learning (1967)
Deep Learning, Stochastic Gradient Descent, Multilayer Perceptrons, Amari, Neural Networks, Pattern Recognition
Japan
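
To make the description concrete, here is a minimal sketch of the idea: a small multilayer perceptron with two modifiable weight layers, trained by per-sample stochastic gradient descent on a non-linearly separable problem. The XOR data, layer sizes, learning rate, and sigmoid activations are illustrative assumptions, not a reconstruction of Amari and Saito's 1967 experiments.

```python
# Illustrative sketch only: a tiny MLP trained with per-sample stochastic
# gradient descent on XOR, a non-linearly separable problem. Sizes, learning
# rate, and data are assumptions for demonstration, not the 1967 setup.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights (modifiable layer 1)
b1 = np.zeros(4)
W2 = rng.normal(0, 1, 4)        # hidden -> output weights (modifiable layer 2)
b2 = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    for i in rng.permutation(len(X)):          # one sample at a time: "stochastic"
        h = sigmoid(X[i] @ W1 + b1)            # hidden activations
        p = sigmoid(h @ W2 + b2)               # predicted probability
        # Gradients of the squared error, via the chain rule
        dp = (p - y[i]) * p * (1 - p)
        dh = dp * W2 * h * (1 - h)
        W2 -= lr * dp * h
        b2 -= lr * dp
        W1 -= lr * np.outer(X[i], dh)
        b1 -= lr * dh

# Typically close to [0, 1, 1, 0] after training
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```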

🧮 Linnainmaa Publishes Reverse Mode Automatic Differentiation

Seppo Linnainmaa published the reverse mode of Automatic differentiation. This method later became known as Backpropagation and is heavily used to train Artificial neural networks (a sketch of the idea appears below).
Linnainmaa Publishes Reverse Mode Automatic Differentiation (1970)
Backpropagation, Automatic Differentiation, Neural Networks, Machine Learning, Deep Learning, Optimization, Training Algorithms, Seppo Linnainmaa
Finland
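
A minimal sketch of the idea behind reverse-mode automatic differentiation follows: each value records the operations that produced it during a forward pass, and derivatives are then propagated backwards through that record with the chain rule. The `Var` class and operator set below are illustrative assumptions, not Linnainmaa's formulation; practical systems also process nodes in topological order rather than recursing per use.

```python
# Minimal sketch of reverse-mode automatic differentiation: each value records
# how it was computed, and gradients flow backwards through that record with
# the chain rule. Illustrative only, not Linnainmaa's 1970 formulation.
import math

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self._parents = parents          # (parent Var, local derivative) pairs

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def sin(self):
        return Var(math.sin(self.value), [(self, math.cos(self.value))])

    def backward(self, seed=1.0):
        # Accumulate d(output)/d(self), then pass the chain-rule product upstream.
        self.grad += seed
        for parent, local_grad in self._parents:
            parent.backward(seed * local_grad)

# f(x, y) = x * y + sin(x);  df/dx = y + cos(x), df/dy = x
x, y = Var(2.0), Var(3.0)
f = x * y + x.sin()
f.backward()
print(f.value, x.grad, y.grad)   # about 6.909, 3 + cos(2) = 2.584, 2.0
```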

➕ Highway and Residual Networks Developed for Deep Learning

Two techniques were developed concurrently to train very deep networks: the Highway network and the Residual neural network (ResNet). They allowed networks with over 1,000 layers to be trained (a sketch contrasting the two appears below).
Highway and Residual Networks Developed for Deep Learning (2015)
Deep Learning, Neural Networks, Highway Networks, Residual Networks, ResNet, Training Techniques
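
Both techniques add a shortcut path around a layer's transformation so that signal and gradients can bypass it. The sketch below contrasts a residual block, y = x + F(x), with a highway layer, y = T(x)·H(x) + (1 − T(x))·x; the layer sizes, activations, and random weights are illustrative assumptions rather than the architectures of the original papers.

```python
# Sketch of the two shortcut ideas with arbitrary sizes and random weights;
# shapes and initialisations are illustrative assumptions, not the papers' layers.
# Residual block:  y = x + F(x)                       (identity skip path)
# Highway layer:   y = T(x) * H(x) + (1 - T(x)) * x   (learned gate T)
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # feature dimension (illustrative)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Residual block: F only has to learn a correction to x, and the identity
# path lets gradients flow unchanged through very deep stacks.
W_f = rng.normal(0, 0.1, (d, d))
def residual_block(x):
    return x + relu(x @ W_f)

# Highway layer: a transform gate T decides, per unit, how much transformed
# signal H(x) versus untouched input x to pass on.
W_h = rng.normal(0, 0.1, (d, d))
W_t = rng.normal(0, 0.1, (d, d))
b_t = -2.0 * np.ones(d)                 # negative bias: gate initially favours the input path
def highway_layer(x):
    t = sigmoid(x @ W_t + b_t)
    return t * relu(x @ W_h) + (1.0 - t) * x

x = rng.normal(0, 1, d)
print(residual_block(x).shape, highway_layer(x).shape)   # both keep the input shape
```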