
Generalization
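
The references below cover techniques for improving generalization, such as regularization, early stopping, and controlling the bias-variance trade-off. As a minimal illustration (not part of the original chapter), the sketch below adds an L2 weight-decay penalty to a Knet-style loss function; the linear model, the penalty coefficient λ, and the plain SGD training loop are illustrative assumptions, not code from the Knet documentation.

    using Knet

    # Hypothetical linear model: w[1] is the weight matrix, w[2] the bias.
    predict(w, x) = w[1] * x .+ w[2]

    # Squared-error loss with an L2 (weight-decay) penalty on the weights.
    # λ is an illustrative regularization coefficient, not a prescribed value.
    function loss(w, x, y; λ=0.01)
        ŷ = predict(w, x)
        sum(abs2, ŷ .- y) / size(y, 2) + λ * sum(abs2, w[1])
    end

    # Knet's grad (from AutoGrad) returns a function computing dloss/dw.
    lossgradient = grad(loss)

    # One epoch of plain SGD over (x, y) minibatches.
    function train!(w, data; lr=0.1)
        for (x, y) in data
            g = lossgradient(w, x, y)
            for i in 1:length(w)
                w[i] -= lr * g[i]
            end
        end
        return w
    end

Monitoring the loss on a held-out validation set during training (early stopping) is another common way to control overfitting; see the references below.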

References

  • http://www.deeplearningbook.org/contents/regularization.html
  • https://d396qusza40orc.cloudfront.net/neuralnets/lecture_slides/lec9.pdf
  • https://d396qusza40orc.cloudfront.net/neuralnets/lecture_slides/lec10.pdf
  • http://blog.cambridgecoding.com/2016/03/24/misleading-modelling-overfitting-cross-validation-and-the-bias-variance-trade-off/