Tune Machine Learning Algorithms in R (random forest case study)

It is difficult to find a good machine learning algorithm for your problem. But once you do, how do you get the best performance out of it?

In this post you will discover three ways that you can tune the parameters of a machine learning algorithm in R.

Walk through a real example step-by-step with working code in R. Use the code as a template to tune machine learning algorithms on your current or next machine learning project.

Tune Random Forest in R.
Photo by Susanne Nilsson, some rights reserved.

Get Better Accuracy From Top Algorithms

It is difficult to find a good or even a well performing machine learning algorithm for your dataset.

Through a process of trial and error you can settle on a short list of algorithms that show promise, but how do you know which is the best?

You could use the default parameters for each algorithm. These are the parameters set by rules of thumb or suggestions in books and research papers. But how do you know the algorithms that you are settling on are showing their best performance?

Use Algorithm Tuning To Search For Algorithm Parameters

The answer is to search for good or even best combinations of algorithm parameters for your problem.

You need a process to tune each machine learning algorithm to know that you are getting the most out of it. Once tuned, you can make an objective comparison between the algorithms on your shortlist.

Searching for algorithm parameters can be difficult; there are many options, such as:

  • What parameters to tune?
  • What search method to use to locate good algorithm parameters?
  • What test options to use to limit overfitting the training data?

Tune Machine Learning Algorithms in R

You can tune your machine learning algorithm parameters in R.

Generally, the approaches in this section assume that you already have a short list of well performing machine learning algorithms for your problem from which you are looking to get better performance.

An excellent way to create your shortlist of well performing algorithms is to use the caret package.

For more on how to use the caret package, see:

In this section we will look at three methods that you can use in R to tune algorithm parameters:

  1. Using the caret R package.
  2. Using tools that come with the algorithm.
  3. Designing your own parameter search.

Before we start tuning, let’s set up our environment and test data.

Test Setup

Let’s take a quick look at the data and the algorithm we will use in this case study.

Test Dataset

In this case study, we will use the Sonar test problem.

This is a dataset from the UCI Machine Learning Repository that describes sonar returns as bouncing off either metal or rock.

It is a binary classification problem with 60 numerical input features that describe the properties of the sonar return. You can learn more about this problem here: Sonar Dataset. You can see world-class published results for this dataset here: Accuracy on the Sonar Dataset.

This is not a particularly difficult dataset, but is non-trivial and interesting for this example.

Let’s load the required libraries and load the dataset from the mlbench package.
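A minimal sketch of this setup, assuming the mlbench, caret and randomForest packages are installed:

# load required packages
library(randomForest)
library(mlbench)
library(caret)

# load the Sonar dataset and separate the 60 inputs (x) from the class output (y)
data(Sonar)
dataset <- Sonar
x <- dataset[, 1:60]
y <- dataset[, 61]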



Test Algorithm

We will use the popular Random Forest algorithm as the subject of our algorithm tuning.

Random Forest is not necessarily the best algorithm for this dataset, but it is a very popular algorithm and no doubt you will find tuning it a useful exercise in your own machine learning work.

When tuning an algorithm, it is important to have a good understanding of your algorithm so that you know what effect the parameters have on the model you are creating.

In this case study, we will stick to tuning two parameters, namely the mtry and ntree parameters, which have the following effect on our random forest model. There are many other parameters, but these two are perhaps the most likely to have the biggest effect on your final accuracy.

Direct from the help page for the randomForest() function in R:

  • mtry: Number of variables randomly sampled as candidates at each split.
  • ntree: Number of trees to grow.

Let’s create a baseline for comparison by using the recommended defaults for each parameter: mtry=floor(sqrt(ncol(x))) (which is mtry=7 for this dataset) and ntree=500.
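A sketch of this baseline using caret’s repeated cross-validation harness might look like the following (the seed value of 7 is an arbitrary choice, reused in the later examples for repeatability):

# create a model with the default mtry value, evaluated with repeated cross-validation
control <- trainControl(method="repeatedcv", number=10, repeats=3)
seed <- 7
metric <- "Accuracy"
set.seed(seed)
mtry <- floor(sqrt(ncol(x)))
tunegrid <- expand.grid(mtry=mtry)
rf_default <- train(Class~., data=dataset, method="rf", metric=metric,
                    tuneGrid=tunegrid, trControl=control)
print(rf_default)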



We can see our estimated accuracy is 81.3%.



1. Tune Using Caret

The caret package in R provides an excellent facility to tune machine learning algorithm parameters.

Not all machine learning algorithms are available in caret for tuning. The choice of parameters is left to the developers of the package, namely Max Kuhn. Only those algorithm parameters that have a large effect (i.e. really require tuning, in Kuhn’s opinion) are available for tuning in caret.

As such, only the mtry parameter is available for tuning in caret. The reason is its effect on the final accuracy, and that it must be found empirically for a dataset.

The ntree parameter is different in that it can be as large as you like, and accuracy continues to increase up to some point. It is less difficult or critical to tune and may be limited more by available compute time than by anything else.

Random Search

One search strategy that we can use is to try random values within a range.

This can be good if we are unsure of what the value might be and we want to overcome any biases we may have for setting the parameter (like the suggested equation above).

Let’s try a random search for mtry using caret:
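Below is a sketch using caret’s search="random" option (tuneLength=15 is an arbitrary budget of candidate values; seed and metric are carried over from the baseline setup above):

# random search for mtry
control <- trainControl(method="repeatedcv", number=10, repeats=3, search="random")
set.seed(seed)
rf_random <- train(Class~., data=dataset, method="rf", metric=metric,
                   tuneLength=15, trControl=control)
print(rf_random)
plot(rf_random)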



Note that we are using a test harness similar to the one we would use to spot-check algorithms. Both 10-fold cross-validation and 3 repeats slow down the search process, but are intended to limit and reduce overfitting on the training set. It won’t remove overfitting entirely. Holding back a validation set for final checking is a great idea if you can spare the data.



We can see that the most accurate value for mtry was 11 with an accuracy of 82.1%.

Tune Random Forest Parameters in R Using Random Search

Grid Search

Another search strategy is to define a grid of algorithm parameters to try.

Each axis of the grid is an algorithm parameter, and points in the grid are specific combinations of parameters. Because we are only tuning one parameter, the grid search is a linear search through a vector of candidate values.
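A sketch of a grid search over mtry values 1 through 15, again reusing the seed and metric from the baseline setup:

# grid search for mtry
control <- trainControl(method="repeatedcv", number=10, repeats=3, search="grid")
set.seed(seed)
tunegrid <- expand.grid(mtry=c(1:15))
rf_gridsearch <- train(Class~., data=dataset, method="rf", metric=metric,
                       tuneGrid=tunegrid, trControl=control)
print(rf_gridsearch)
plot(rf_gridsearch)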



We can see that the most accurate value for mtry was 2 with an accuracy of 83.78%.



Tune Random Forest Parameters in R Using Grid Search

2. Tune Using Algorithm Tools

Some algorithms provide tools for tuning the parameters of the algorithm.

For example, the random forest algorithm implementation in the randomForest package provides the tuneRF() function that searches for optimal mtry values given your data.
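A sketch of such a call (the stepFactor, improve and ntreeTry values shown are illustrative choices, not recommendations):

# search for a good mtry value using tuneRF from the randomForest package
set.seed(seed)
bestmtry <- tuneRF(x, y, stepFactor=1.5, improve=1e-5, ntreeTry=500)
print(bestmtry)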



You can see that the most accurate value for mtry was 10 with an OOBError of 0.1442308.

This does not really match up with what we saw in the caret repeated cross-validation experiment above, where mtry=10 gave an accuracy of 82.04%. Nevertheless, it is an alternative way to tune the algorithm.



Tune Random Forest Parameters in R using tuneRF

3. Craft Your Own Parameter Search

Often you want to search both over the parameters that must be tuned (handled by caret) and over those that need to be scaled or adapted more generally for your dataset.

You have to craft your own parameter search.

Two popular options that I recommend are:

  1. Tune Manually: Write R code to create lots of models and compare their accuracy using caret.
  2. Extend Caret: Create an extension to caret that adds in additional parameters to caret for the algorithm you want to tune.

Tune Manually

We want to keep using caret because it provides a direct point of comparison to our previous models (apples to apples, even the same data splits) and because of the repeated cross-validation test harness that we like, as it reduces the severity of overfitting.

One approach is to create many caret models for our algorithm and pass different parameters directly to the algorithm manually. Let’s look at an example of doing this to evaluate different values for ntree while holding mtry constant.
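One way to sketch this is a loop over candidate ntree values, storing each fitted caret model and comparing them afterwards with caret’s resamples() helper (the candidate values shown are illustrative):

# manual search over ntree, holding mtry at the default value
control <- trainControl(method="repeatedcv", number=10, repeats=3, search="grid")
tunegrid <- expand.grid(mtry=floor(sqrt(ncol(x))))
modellist <- list()
for (ntree in c(1000, 1500, 2000, 2500)) {
    set.seed(seed)
    fit <- train(Class~., data=dataset, method="rf", metric=metric,
                 tuneGrid=tunegrid, trControl=control, ntree=ntree)
    modellist[[toString(ntree)]] <- fit
}

# collect the resampling results and compare the accuracy distributions
results <- resamples(modellist)
summary(results)
dotplot(results)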



You can see that the most accurate value for ntree was perhaps 2000 with a mean accuracy of 82.02% (a lift over our very first experiment using the default mtry value).

The results perhaps suggest an optimal value for ntree between 2000 and 2500. Also note, we held mtry constant at the default value. We could repeat the experiment with the possibly better mtry=2 from the experiment above, or try combinations of ntree and mtry in case they have interaction effects.



Tune Random Forest Parameters in R Manually

Extend Caret

Another approach is to create a “new” algorithm for caret to support.

This is the same random forest algorithm you are using, only modified so that it supports the tuning of multiple parameters.

A risk with this approach is that caret’s native support for the algorithm has additional or fancy code wrapping it that subtly but importantly changes its behavior. You may need to repeat prior experiments with your custom algorithm support.

We can define our own algorithm to use in caret by defining a list that contains a number of custom named elements that the caret package looks for, such as how to fit and how to predict. Below is a definition of a custom random forest algorithm for use with caret that takes both the mtry and ntree parameters.
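A sketch of such a definition, covering the main elements caret looks for (type, parameters, grid, fit, predict, prob, sort and levels):

# a custom caret method wrapping randomForest, with both mtry and ntree as tuning parameters
customRF <- list(type="Classification", library="randomForest", loop=NULL)
customRF$parameters <- data.frame(parameter=c("mtry", "ntree"),
                                  class=rep("numeric", 2),
                                  label=c("mtry", "ntree"))
# the grid function is unused here because we pass tuneGrid explicitly
customRF$grid <- function(x, y, len=NULL, search="grid") {}
customRF$fit <- function(x, y, wts, param, lev, last, weights, classProbs, ...) {
  randomForest(x, y, mtry=param$mtry, ntree=param$ntree, ...)
}
customRF$predict <- function(modelFit, newdata, preProc=NULL, submodels=NULL) {
  predict(modelFit, newdata)
}
customRF$prob <- function(modelFit, newdata, preProc=NULL, submodels=NULL) {
  predict(modelFit, newdata, type="prob")
}
customRF$sort <- function(x) x[order(x[, 1]), ]
customRF$levels <- function(x) x$classes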



Now, let’s make use of this custom list in our call to the caret train function, and try tuning different values for ntree and mtry.
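A sketch of the combined search over both parameters (a 15 x 4 grid, so 60 combinations evaluated under repeated cross-validation):

# train across a grid of mtry and ntree combinations using the custom method
control <- trainControl(method="repeatedcv", number=10, repeats=3)
tunegrid <- expand.grid(mtry=c(1:15), ntree=c(1000, 1500, 2000, 2500))
set.seed(seed)
custom <- train(Class~., data=dataset, method=customRF, metric=metric,
                tuneGrid=tunegrid, trControl=control)
print(custom)
plot(custom)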



This may take a minute or two to run.

You can see that the most accurate values for ntree and mtry were 2000 and 2 with an accuracy of 84.43%.

We do perhaps see some interaction effects between the number of trees (ntree) and the value of mtry. Nevertheless, if we had chosen the best value for mtry found using grid search of 2 (above) and the best value of ntree found using grid search of 2000 (above), in this case we would have achieved the same level of tuning found in this combined search. This is a nice confirmation.



Custom Tuning of Random Forest parameters in R

For more information on defining custom algorithms in caret see:

To see the actual wrapper for random forest used by caret that you can use as a starting point, see:

Summary

In this post you discovered the importance of tuning well performing machine learning algorithms in order to get the best performance from them.

You worked through an example of tuning the Random Forest algorithm in R and discovered three ways that you can tune a well performing algorithm.

  1. Using the caret R package.
  2. Using tools that come with the algorithm.
  3. Designing your own parameter search.

You now have a worked example and template that you can use to tune machine learning algorithms in R on your current or next project.

Next Step

Work through the example in this post.

  1. Open your R interactive environment.
  2. Type or copy-paste the sample code above.
  3. Take your time and understand what is going on; use R’s help to read up on the functions used.

Do you have any questions? Ask in the comments and I will do my best to answer them.