Dropout Regularization in Deep Learning Models With Keras

A simple and powerful regularization technique for neural networks and deep learning models is dropout.

In this post you will discover the dropout regularization technique and how to apply it to your models in Python with Keras.

After reading this post you will know:

  • How the dropout regularization technique works.
  • How to use dropout on your input layers.
  • How to use dropout on your hidden layers.
  • How to tune the dropout level on your problem.

Let’s get started.

Photo by Trekking Rinjani, some rights reserved.

Dropout Regularization For Neural Networks

Dropout is a regularization technique for neural network models proposed by Srivastava, et al. in their 2014 paper Dropout: A Simple Way to Prevent Neural Networks from Overfitting (download the PDF).

Dropout is a technique where randomly selected neurons are ignored during training. They are “dropped out” randomly. This means that their contribution to the activation of downstream neurons is temporarily removed on the forward pass and no weight updates are applied to the neuron on the backward pass.
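As a rough illustration of the forward-pass effect (a plain NumPy sketch, not code from this post), a layer's activations can be multiplied by a random binary mask so that dropped neurons contribute nothing downstream:

import numpy as np

rng = np.random.default_rng(seed=1)
activations = np.array([0.5, 1.2, 0.3, 0.8, 0.9])  # outputs of one layer for a single sample
keep_mask = rng.random(activations.shape) >= 0.2   # each neuron dropped with probability 0.2
masked = activations * keep_mask                   # dropped neurons pass 0.0 to the next layer
print(masked)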

As a neural network learns, neuron weights settle into their context within the network. Weights of neurons are tuned for specific features, providing some specialization. Neighboring neurons come to rely on this specialization, which if taken too far can result in a fragile model too specialized to the training data. This reliance on context for a neuron during training is referred to as complex co-adaptation.

You can imagine that if neurons are randomly dropped out of the network during training, other neurons will have to step in and handle the representation required to make predictions for the missing neurons. This is believed to result in multiple independent internal representations being learned by the network.

The effect is that the network becomes less sensitive to the specific weights of neurons. This in turn results in a network that is capable of better generalization and is less likely to overfit the training data.

Dropout Regularization in Keras

Dropout is easily implemented by randomly selecting nodes to be dropped-out with a given probability (e.g. 20%) each weight update cycle. This is how Dropout is implemented in Keras. Dropout is only used during the training of a model and is not used when evaluating the skill of the model.
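A small way to see this behavior (a sketch assuming TensorFlow 2.x and tf.keras) is to call a standalone Dropout layer in and out of training mode:

import numpy as np
import tensorflow as tf

layer = tf.keras.layers.Dropout(0.2)
x = np.ones((1, 10), dtype="float32")

# Training mode: roughly 20% of the values are zeroed and the rest are
# scaled by 1/0.8, so the expected sum of the activations is unchanged.
print(layer(x, training=True))

# Inference mode: the inputs pass through unchanged.
print(layer(x, training=False))

This scaling of the retained units during training (inverted dropout) is what lets Keras leave the weights untouched when the model is used for prediction.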

Next we will explore a few different ways of using Dropout in Keras.

The examples will use the Sonar dataset. This is a binary classification problem where the objective is to correctly discriminate rocks from mock-mines (metal cylinders) based on sonar chirp returns. It is a good test dataset for neural networks because all of the input values are numerical and have the same scale.

The dataset can be downloaded from the UCI Machine Learning repository. You can place the sonar dataset in your current working directory with the file name sonar.csv.

We will evaluate the developed models using scikit-learn with 10-fold cross validation, in order to better tease out differences in the results.

There are 60 input values and a single output value and the input values are standardized before being used in the network. The baseline neural network model has two hidden layers, the first with 60 units and the second with 30. Stochastic gradient descent is used to train the model with a relatively low learning rate and momentum.

The full baseline model is listed below.
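The original listing is not reproduced here; the sketch below is a reconstruction of the described setup, assuming TensorFlow 2.x (tf.keras) and a sonar.csv file in the current working directory. The epoch count, batch size, and the exact learning rate and momentum values are illustrative assumptions, and a manual StratifiedKFold loop stands in for the Keras scikit-learn wrapper used in the original post.

# Baseline MLP on the Sonar dataset (a reconstruction, not the original listing)
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import LabelEncoder, StandardScaler
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD

# load the dataset and split it into the 60 inputs and the class label
dataframe = pd.read_csv("sonar.csv", header=None)
values = dataframe.values
X = values[:, 0:60].astype(float)
y = LabelEncoder().fit_transform(values[:, 60])  # encode the R/M labels as 0/1

def create_baseline():
    # two hidden layers (60 and 30 units) and a sigmoid output for binary classification
    model = Sequential([
        Input(shape=(60,)),
        Dense(60, activation="relu"),
        Dense(30, activation="relu"),
        Dense(1, activation="sigmoid"),
    ])
    # relatively low learning rate and momentum, as described above
    model.compile(loss="binary_crossentropy",
                  optimizer=SGD(learning_rate=0.01, momentum=0.8),
                  metrics=["accuracy"])
    return model

# 10-fold cross validation, standardizing the inputs on each training fold
scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=7).split(X, y):
    scaler = StandardScaler().fit(X[train_idx])
    model = create_baseline()
    model.fit(scaler.transform(X[train_idx]), y[train_idx], epochs=300, batch_size=16, verbose=0)
    _, acc = model.evaluate(scaler.transform(X[test_idx]), y[test_idx], verbose=0)
    scores.append(acc)
print("Baseline: %.2f%% (%.2f%%)" % (np.mean(scores) * 100, np.std(scores) * 100))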



Running the example generates an estimated classification accuracy of 82%.




Using Dropout on the Visible Layer

Dropout can be applied to the input neurons, referred to as the visible layer.

In the example below we add a new Dropout layer between the input (or visible layer) and the first hidden layer. The dropout rate is set to 20%, meaning one in 5 inputs will be randomly excluded from each update cycle.

Additionally, as recommended in the original paper on Dropout, a constraint is imposed on the weights for each hidden layer, ensuring that the maximum norm of the weights does not exceed a value of 3. This is done by setting the W_constraint argument on the Dense class when constructing the layers (named kernel_constraint in more recent Keras versions).

The learning rate was lifted by one order of magnitude and the momentum was increased to 0.9. Both increases were also recommended in the original Dropout paper.

Continuing on from the baseline example above, the code below exercises the same network with input dropout.
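Again, the original listing is not reproduced here. In the sketch below (same assumptions as the baseline reconstruction above) only the model-building function changes; the data loading and cross-validation loop stay the same.

# Visible-layer dropout: only the model definition changes from the baseline sketch
from tensorflow.keras.constraints import MaxNorm
from tensorflow.keras.layers import Dense, Dropout, Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD

def create_model_visible_dropout():
    model = Sequential([
        Input(shape=(60,)),
        Dropout(0.2),  # drop 20% of the inputs on each update cycle
        # max-norm of 3 on the hidden-layer weights (kernel_constraint is the
        # current name for the older W_constraint argument)
        Dense(60, activation="relu", kernel_constraint=MaxNorm(3)),
        Dense(30, activation="relu", kernel_constraint=MaxNorm(3)),
        Dense(1, activation="sigmoid"),
    ])
    # learning rate lifted by an order of magnitude, momentum raised to 0.9
    model.compile(loss="binary_crossentropy",
                  optimizer=SGD(learning_rate=0.1, momentum=0.9),
                  metrics=["accuracy"])
    return model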



Running the example provides a nice lift in classification accuracy to 86%.




Using Dropout on Hidden Layers

Dropout can be applied to hidden neurons in the body of your network model.

In the example below, Dropout is applied between the two hidden layers and between the last hidden layer and the output layer. Again, a dropout rate of 20% is used, as is a weight constraint on those layers.
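As before, the sketch below only swaps the model-building function relative to the baseline reconstruction; everything else is assumed unchanged.

# Hidden-layer dropout: dropout between the hidden layers and before the output
from tensorflow.keras.constraints import MaxNorm
from tensorflow.keras.layers import Dense, Dropout, Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD

def create_model_hidden_dropout():
    model = Sequential([
        Input(shape=(60,)),
        Dense(60, activation="relu", kernel_constraint=MaxNorm(3)),
        Dropout(0.2),  # between the two hidden layers
        Dense(30, activation="relu", kernel_constraint=MaxNorm(3)),
        Dropout(0.2),  # between the last hidden layer and the output layer
        Dense(1, activation="sigmoid"),
    ])
    model.compile(loss="binary_crossentropy",
                  optimizer=SGD(learning_rate=0.1, momentum=0.9),
                  metrics=["accuracy"])
    return model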



We can see that for this problem and for the chosen network configuration that using dropout in the hidden layers did not lift performance. In fact, performance was worse than the baseline.

It is possible that additional training epochs are required, or that further tuning of the learning rate is required.




Tips For Using Dropout

The original paper on Dropout provides experimental results on a suite of standard machine learning problems. As a result they provide a number of useful heuristics to consider when using dropout in practice.

  • Generally use a small dropout value of 20%-50% of neurons with 20% providing a good starting point. A probability too low has minimal effect and a value too high results in under-learning by the network.
  • Use a larger network. You are likely to get better performance when dropout is used on a larger network, giving the model more of an opportunity to learn independent representations.
  • Use dropout on incoming (visible) as well as hidden units. Application of dropout at each layer of the network has shown good results.
  • Use a large learning rate with decay and a large momentum. Increase your learning rate by a factor of 10 to 100 and use a high momentum value of 0.9 or 0.99.
  • Constrain the size of network weights. A large learning rate can result in very large network weights. Imposing a constraint on the size of network weights, such as max-norm regularization with a size of 4 or 5, has been shown to improve results (a sketch combining several of these tips follows this list).
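Several of these tips can be combined in a single model definition. The sketch below again assumes tf.keras; the layer sizes and the decay schedule values are illustrative choices, not figures from the original post.

# Combining the tips: dropout on the visible and hidden layers, a max-norm of 4,
# and a large, decaying learning rate with high momentum (illustrative values)
from tensorflow.keras.constraints import MaxNorm
from tensorflow.keras.layers import Dense, Dropout, Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.optimizers.schedules import ExponentialDecay

model = Sequential([
    Input(shape=(60,)),
    Dropout(0.2),                                                 # dropout on the visible layer
    Dense(120, activation="relu", kernel_constraint=MaxNorm(4)),  # a larger hidden layer
    Dropout(0.2),                                                 # dropout on the hidden layer
    Dense(1, activation="sigmoid"),
])
lr_schedule = ExponentialDecay(initial_learning_rate=0.1, decay_steps=1000, decay_rate=0.96)
model.compile(loss="binary_crossentropy",
              optimizer=SGD(learning_rate=lr_schedule, momentum=0.9),
              metrics=["accuracy"])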

More Resources on Dropout

Below are some resources that you can use to learn more about dropout in neural networks and deep learning models.

Summary

In this post you discovered the dropout regularization technique for deep learning models. You learned:

  • What dropout is and how it works.
  • How you can use dropout on your own deep learning models.
  • Tips for getting the best results from dropout on your own models.

Do you have any questions about dropout or about this post? Ask your questions in the comments and I will do my best to answer.
