Dropout: A Simple Way to Prevent Neural Networks from Overfitting

Basic Information

  • Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov @ University of Toronto
  • JMLR 2014

Problem Description

In recent years it has been observed that neural networks with more parameters have greater expressive power and usually achieve better performance. However, as the parameter count grows, models also become increasingly prone to overfitting.
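Dropout addresses this by randomly zeroing units during training so that units cannot co-adapt. A minimal sketch of the idea (assuming the common "inverted dropout" variant, where activations are rescaled by the keep probability at training time so no rescaling is needed at test time; the function name and NumPy usage here are illustrative, not from the paper):

```python
import numpy as np

def dropout_forward(x, p_keep=0.5, train=True, rng=None):
    """Inverted dropout: randomly zero units during training and
    rescale survivors by 1/p_keep so the expected activation is unchanged."""
    if not train:
        # At test time the full network is used unchanged.
        return x
    rng = np.random.default_rng(rng)
    mask = rng.random(x.shape) < p_keep  # keep each unit with probability p_keep
    return x * mask / p_keep

# With p_keep = 0.5, surviving units are doubled and the rest are zeroed,
# so the mean output stays close to the mean input.
x = np.ones((1000, 100))
out = dropout_forward(x, p_keep=0.5, rng=0)
```

Each forward pass samples a different mask, which is what makes training with dropout resemble averaging over an exponential number of thinned sub-networks.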

