Knet.jl

Generalization

References

  • http://www.deeplearningbook.org/contents/regularization.html
  • https://d396qusza40orc.cloudfront.net/neuralnets/lecture_slides/lec9.pdf
  • https://d396qusza40orc.cloudfront.net/neuralnets/lecture_slides/lec10.pdf
  • http://blog.cambridgecoding.com/2016/03/24/misleading-modelling-overfitting-cross-validation-and-the-bias-variance-trade-off/
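The references above cover regularization, the standard tool for improving generalization. As a minimal language-agnostic sketch (plain Python for illustration, not Knet code; the function name `sgd_l2` is made up for this example), L2 regularization, also called weight decay, simply adds the gradient of a penalty term λw² to the loss gradient, shrinking weights toward zero:

```python
# Sketch of L2 regularization (weight decay) in one-parameter
# gradient descent. Pure Python; not the Knet API.

def sgd_l2(w, grad_loss, lr=0.1, lam=0.0, steps=100):
    """Gradient descent on loss(w) + lam * w**2.

    The L2 penalty contributes 2*lam*w to the gradient,
    pulling w toward zero at each step.
    """
    for _ in range(steps):
        w -= lr * (grad_loss(w) + 2 * lam * w)
    return w

# Toy quadratic loss (w - 3)^2; its unregularized minimum is w = 3.
grad = lambda w: 2 * (w - 3)

w_plain = sgd_l2(0.0, grad, lam=0.0)   # converges to 3
w_reg   = sgd_l2(0.0, grad, lam=0.1)   # pulled below 3, toward 0
```

With the penalty, the minimizer of (w − 3)² + 0.1 w² solves 2(w − 3) + 0.2 w = 0, giving w = 6/2.2 ≈ 2.73 instead of 3: the solution trades a little training loss for smaller weights, which is the bias–variance trade-off discussed in the last reference.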

Revision dd76cb25.