Document worth reading: “Stochastic Gradient Descent Tricks”

Chapter 1 strongly advocates the stochastic back-propagation method to train neural networks. This is in fact an instance of a more general technique called stochastic gradient descent (SGD). This chapter provides background material, explains why SGD is a good learning algorithm when the training set is large, and provides useful recommendations.

Stochastic Gradient Descent Tricks
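The abstract refers to the generic SGD update w ← w − γ_t ∇w Q(z_t, w), which processes one randomly drawn training example per step instead of the full training set. Below is a minimal illustrative sketch of that update in Python; the least-squares model, the toy data, and the decreasing 1/t learning-rate schedule are assumptions chosen for demonstration, not details taken from the paper.

```python
# Minimal SGD sketch: fit a linear model to squared loss
# Q((x, y), w) = 0.5 * (w.x - y)^2, one example per update.
import random

def sgd(examples, dim, gamma0=0.1, epochs=5):
    """Run plain SGD; returns the learned weight vector."""
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        random.shuffle(examples)  # visit examples in random order
        for x, y in examples:
            t += 1
            # Decreasing step size (an assumed schedule, not the paper's).
            gamma = gamma0 / (1.0 + gamma0 * t)
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            # Gradient of the per-example loss is err * x.
            w = [wi - gamma * err * xi for wi, xi in zip(w, x)]
    return w

if __name__ == "__main__":
    # Toy data: y = 2*x1 - 3*x2 + 0.5, with a constant bias feature.
    random.seed(0)
    data = []
    for _ in range(1000):
        x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
        data.append(([x1, x2, 1.0], 2 * x1 - 3 * x2 + 0.5))
    print(sgd(data, dim=3))  # should approach [2.0, -3.0, 0.5]
```

The point of the sketch is the cost profile the abstract alludes to: each update touches a single example, so the per-step cost is independent of the training-set size, which is why SGD scales well when the training set is large.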