How to Develop a Deep Learning Bag-of-Words Model for Predicting Movie Review Sentiment

Movie reviews can be classified as either favorable or unfavorable.

The evaluation of movie review text is a classification problem often called sentiment analysis. A popular technique for developing sentiment analysis models is to use a bag-of-words model that transforms documents into vectors where each word in the document is assigned a score.

In this tutorial, you will discover how you can develop a deep learning predictive model using the bag-of-words representation for movie review sentiment classification.

After completing this tutorial, you will know:

  • How to prepare the review text data for modeling with a restricted vocabulary.
  • How to use the bag-of-words model to prepare train and test data.
  • How to develop a Multilayer Perceptron bag-of-words model and use it to make predictions on new review text data.

Let’s get started.


Tutorial Overview

This tutorial is divided into 4 parts; they are:

  1. Movie Review Dataset
  2. Data Preparation
  3. Bag-of-Words Representation
  4. Sentiment Analysis Models

Movie Review Dataset

The Movie Review Data is a collection of movie reviews retrieved from the imdb.com website in the early 2000s by Bo Pang and Lillian Lee. The reviews were collected and made available as part of their research on natural language processing.

The reviews were originally released in 2002, but an updated and cleaned up version was released in 2004, referred to as “v2.0”.

The dataset is comprised of 1,000 positive and 1,000 negative movie reviews drawn from an archive of the rec.arts.movies.reviews newsgroup hosted at imdb.com. The authors refer to this dataset as the “polarity dataset”.

Our data contains 1000 positive and 1000 negative reviews all written before 2002, with a cap of 20 reviews per author (312 authors total) per category. We refer to this corpus as the polarity dataset.

— A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts, 2004.

The data has been cleaned up somewhat, for example:

  • The dataset is comprised of only English reviews.
  • All text has been converted to lowercase.
  • There is white space around punctuation like periods, commas, and brackets.
  • Text has been split into one sentence per line.

The data has been used for a few related natural language processing tasks. For classification, the performance of classical models (such as Support Vector Machines) on the data is in the range of high 70% to low 80% (e.g. 78%-82%).

More sophisticated data preparation may see results as high as 86% with 10-fold cross validation. This gives us a ballpark of low-to-mid 80s if we were looking to use this dataset in experiments on modern methods.

… depending on choice of downstream polarity classifier, we can achieve highly statistically significant improvement (from 82.8% to 86.4%)

— A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts, 2004.

You can download the dataset (review_polarity.tar.gz) from the Movie Review Data page at Cornell:

  • http://www.cs.cornell.edu/people/pabo/movie-review-data/review_polarity.tar.gz

After unzipping the file, you will have a directory called “txt_sentoken” with two sub-directories, “neg” and “pos”, containing the text of the negative and positive reviews respectively. Reviews are stored one per file, with filenames numbered cv000 to cv999 in each of the neg and pos directories.

Next, let’s look at loading and preparing the text data.

Data Preparation

In this section, we will look at 3 things:

  1. Separation of data into training and test sets.
  2. Loading and cleaning the data to remove punctuation and numbers.
  3. Defining a vocabulary of preferred words.

Split into Train and Test Sets

We are pretending that we are developing a system that can predict the sentiment of a textual movie review as either positive or negative.

This means that after the model is developed, we will need to make predictions on new textual reviews. This will require all of the same data preparation to be performed on those new reviews as is performed on the training data for the model.

We will ensure that this constraint is built into the evaluation of our models by splitting the training and test datasets prior to any data preparation. This means that any knowledge in the test set that could help us better prepare the data (e.g. the words used) is unavailable during the preparation of data and the training of the model.

That being said, we will use the last 100 positive reviews and the last 100 negative reviews as a test set (200 reviews) and the remaining 1,800 reviews as the training dataset.

This is a 90% train, 10% test split of the data.

The split can be imposed easily by using the filenames of the reviews, where reviews numbered cv000 to cv899 are used for training and reviews numbered cv900 onwards are used for testing the model.

Loading and Cleaning Reviews

The text data is already pretty clean, so not much preparation is required.

Without getting too much into the details, we will prepare the data using the following method:

  • Split tokens on white space.
  • Remove all punctuation from words.
  • Remove all words that are not purely comprised of alphabetical characters.
  • Remove all words that are known stop words.
  • Remove all words that have a length <= 1 character.

We can put all of these steps into a function called clean_doc() that takes as an argument the raw text loaded from a file and returns a list of cleaned tokens. We can also define a function load_doc() that loads a document from file ready for use with the clean_doc() function.

An example of cleaning the first positive review is listed below.
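A minimal sketch of these two functions, applied to the first positive review, might look like the following (the filename cv000_29590.txt follows the dataset's naming convention, and NLTK's stop word list must be downloaded once beforehand):

```python
import string
from nltk.corpus import stopwords  # requires a one-off nltk.download('stopwords')

# load a document into memory
def load_doc(filename):
    file = open(filename, 'r')
    text = file.read()
    file.close()
    return text

# turn a document into clean tokens
def clean_doc(doc):
    # split into tokens by white space
    tokens = doc.split()
    # remove punctuation from each token
    table = str.maketrans('', '', string.punctuation)
    tokens = [w.translate(table) for w in tokens]
    # remove tokens that are not purely alphabetic
    tokens = [word for word in tokens if word.isalpha()]
    # filter out stop words
    stop_words = set(stopwords.words('english'))
    tokens = [w for w in tokens if not w in stop_words]
    # filter out tokens with a length <= 1 character
    tokens = [word for word in tokens if len(word) > 1]
    return tokens

# load and clean the first positive review
filename = 'txt_sentoken/pos/cv000_29590.txt'
text = load_doc(filename)
tokens = clean_doc(text)
print(tokens)
```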



Running the example prints a long list of clean tokens.

There are many more cleaning steps we may want to explore, and I leave them as further exercises. I’d love to see what you can come up with.



Define a Vocabulary

It is important to define a vocabulary of known words when using a bag-of-words model.

The more words, the larger the representation of documents, therefore it is important to constrain the words to only those believed to be predictive. This is difficult to know beforehand and often it is important to test different hypotheses about how to construct a useful vocabulary.

We have already seen how we can remove punctuation and numbers from the vocabulary in the previous section. We can repeat this for all documents and build a set of all known words.

We can develop a vocabulary as a Counter, which is a dictionary mapping of words to their counts that allows us to easily update and query it.

Each document can be added to the counter (a new function called add_doc_to_vocab()) and we can step over all of the reviews in the negative directory and then the positive directory (a new function called process_docs()).

The complete example is listed below.
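A sketch of that script, reusing load_doc() and clean_doc() from above; skipping filenames that start with 'cv9' keeps the test reviews out of the vocabulary, per the split defined earlier:

```python
from os import listdir
from collections import Counter

# load a document, clean it, and add its tokens to the vocabulary
def add_doc_to_vocab(filename, vocab):
    doc = load_doc(filename)
    tokens = clean_doc(doc)
    vocab.update(tokens)

# walk all documents in a directory, adding each to the vocabulary
def process_docs(directory, vocab):
    for filename in listdir(directory):
        # skip reviews held out for the test set
        if filename.startswith('cv9'):
            continue
        path = directory + '/' + filename
        add_doc_to_vocab(path, vocab)

# define the vocabulary as a Counter
vocab = Counter()
# add all training reviews to the vocabulary
process_docs('txt_sentoken/neg', vocab)
process_docs('txt_sentoken/pos', vocab)
# summarize
print(len(vocab))
print(vocab.most_common(50))
```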



Running the example shows that we have a vocabulary of 44,276 words.

We also can see a sample of the top 50 most used words in the movie reviews.

Note that this vocabulary was constructed based on only those reviews in the training dataset.



We can step through the vocabulary and remove all words that have a low occurrence, such as only being used once or twice in all reviews.

For example, the following snippet will retrieve only the tokens that appear 2 or more times in all reviews.
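One way to write that snippet, continuing from the Counter above (min_occurrence is just an illustrative name):

```python
# keep tokens that appear at least twice across all reviews
min_occurrence = 2
tokens = [k for k, c in vocab.items() if c >= min_occurrence]
print(len(tokens))
```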



Running the above example with this addition shows that the vocabulary size drops by a little more than half its size, from 44,276 to 25,767 words.




Finally, the vocabulary can be saved to a new file called vocab.txt that we can later load and use to filter movie reviews prior to encoding them for modeling. We define a new function called save_list() that saves the vocabulary to file, with one word per line.

For example:
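A possible implementation:

```python
# save a list of tokens to file, one word per line
def save_list(lines, filename):
    data = '\n'.join(lines)
    file = open(filename, 'w')
    file.write(data)
    file.close()

# save the filtered vocabulary
save_list(tokens, 'vocab.txt')
```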



Running the min occurrence filter on the vocabulary and saving it to file, you should now have a new file called vocab.txt with only the words we are interested in.

The order of words in your file will differ, but should look something like the following:



We are now ready to look at extracting features from the reviews ready for modeling.

Bag-of-Words Representation

In this section, we will look at how we can convert each review into a representation that we can provide to a Multilayer Perceptron model.

A bag-of-words model is a way of extracting features from text so the text input can be used with machine learning algorithms like neural networks.

Each document, in this case a review, is converted into a vector representation. The number of items in the vector representing a document corresponds to the number of words in the vocabulary. The larger the vocabulary, the longer the vector representation, hence the preference for smaller vocabularies in the previous section.

Words in a document are scored and the scores are placed in the corresponding location in the representation. We will look at different word scoring methods in the next section.

In this section, we are concerned with converting reviews into vectors ready for training a first neural network model.

This section is divided into 2 steps:

  1. Converting reviews to lines of tokens.
  2. Encoding reviews with a bag-of-words model representation.

Reviews to Lines of Tokens

Before we can convert reviews to vectors for modeling, we must first clean them up.

This involves loading them, performing the cleaning operation developed above, filtering out words not in the chosen vocabulary, and converting the remaining tokens into a single string or line ready for encoding.

First, we need a function to prepare one document. Below lists the function doc_to_line() that will load a document, clean it, filter out tokens not in the vocabulary, then return the document as a string of white space separated tokens.
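A sketch of doc_to_line(), reusing the earlier loading and cleaning functions:

```python
# load a doc, clean it, and return a line of vocabulary tokens
def doc_to_line(filename, vocab):
    doc = load_doc(filename)
    tokens = clean_doc(doc)
    # filter out tokens not in the vocabulary
    tokens = [w for w in tokens if w in vocab]
    return ' '.join(tokens)
```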



Next, we need a function to work through all documents in a directory (such as ‘pos‘ and ‘neg‘) to convert the documents into lines.

Below lists the process_docs() function that does just this, expecting a directory name and a vocabulary set as input arguments and returning a list of processed documents.
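One way that version of process_docs() might look; the 'cv9' check again skips the held-out test reviews:

```python
from os import listdir

# load and clean all training docs in a directory
def process_docs(directory, vocab):
    lines = list()
    for filename in listdir(directory):
        # skip reviews held out for the test set
        if filename.startswith('cv9'):
            continue
        path = directory + '/' + filename
        # load, clean, and filter the document to a line of tokens
        line = doc_to_line(path, vocab)
        lines.append(line)
    return lines
```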



Finally, we need to load the vocabulary and turn it into a set for use in cleaning reviews.
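A short sketch of that step:

```python
# load the vocabulary and convert it to a set for fast membership tests
vocab_filename = 'vocab.txt'
vocab = load_doc(vocab_filename)
vocab = set(vocab.split())
```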



We can put all of this together, reusing the loading and cleaning functions developed in previous sections.

The complete example is listed below, demonstrating how to prepare the positive and negative reviews from the training dataset.



Movie Reviews to Bag-of-Words Vectors

We will use the Keras API to convert reviews to encoded document vectors.

Keras provides the Tokenizer class that can do some of the cleaning and vocab definition tasks that we took care of in the previous section.

It is better to do this ourselves to know exactly what was done and why. Nevertheless, the Tokenizer class is convenient and will easily transform documents into encoded vectors.

First, the Tokenizer must be created, then fit on the text documents in the training dataset.

In this case, these are the aggregation of the negative_lines and positive_lines lists developed in the previous section.
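A sketch of that step; negative_lines and positive_lines are the lists built by process_docs() above, aggregated with the negative reviews first to match the label ordering used later:

```python
from keras.preprocessing.text import Tokenizer

# aggregate the training documents, negatives first
docs = negative_lines + positive_lines
# create the tokenizer and fit it on the training documents
tokenizer = Tokenizer()
tokenizer.fit_on_texts(docs)
```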



This process determines a consistent way to convert the vocabulary to a fixed-length vector with 25,768 elements: the 25,767 words in the vocabulary file vocab.txt, plus one extra element because Keras reserves index 0 and does not assign it to any word.

Next, documents can be encoded using the Tokenizer by calling texts_to_matrix(). The function takes both a list of documents to encode and an encoding mode, which is the method used to score words in the document. Here we specify ‘freq‘ to score words based on their frequency in the document.

This can be used to encode the training data, for example:
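A sketch of that call:

```python
# encode the training documents as frequency-scored vectors
Xtrain = tokenizer.texts_to_matrix(docs, mode='freq')
print(Xtrain.shape)
```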



This encodes all of the positive and negative reviews in the training dataset and prints the shape of the resulting matrix as 1,800 documents, each with 25,768 elements. The matrix is ready to use as training data for a model.




We can encode the test data in a similar way.

First, the process_docs() function from the previous section needs to be modified to only process reviews in the test dataset, not the training dataset.

We support the loading of both the training and test datasets by adding an is_train argument and using it to decide which review file names to skip.
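One way to write the modified function under that convention:

```python
# load and clean all docs in a directory for either the train or test split
def process_docs(directory, vocab, is_train):
    lines = list()
    for filename in listdir(directory):
        # skip files that belong to the other split
        if is_train and filename.startswith('cv9'):
            continue
        if not is_train and not filename.startswith('cv9'):
            continue
        path = directory + '/' + filename
        line = doc_to_line(path, vocab)
        lines.append(line)
    return lines
```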



Next, we can load and encode positive and negative reviews in the test set in the same way as we did for the training set.
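A sketch of that step, assuming the tokenizer has already been fit on the training lines:

```python
# load and encode the held-out test reviews, negatives first
positive_lines = process_docs('txt_sentoken/pos', vocab, False)
negative_lines = process_docs('txt_sentoken/neg', vocab, False)
docs = negative_lines + positive_lines
Xtest = tokenizer.texts_to_matrix(docs, mode='freq')
print(Xtest.shape)
```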



We can put all of this together in a single example.



Running the example prints both the shape of the encoded training dataset and test dataset with 1,800 and 200 documents respectively, each with the same sized encoding vocabulary (vector length).



Sentiment Analysis Models

In this section, we will develop Multilayer Perceptron (MLP) models to classify encoded documents as either positive or negative.

The models will be simple feedforward network models with fully connected layers called Dense in the Keras deep learning library.

This section is divided into 3 parts; they are:

  1. First sentiment analysis model
  2. Comparing word scoring modes
  3. Making a prediction for new reviews

First Sentiment Analysis Model

We can develop a simple MLP model to predict the sentiment of encoded reviews.

The model will have an input layer whose size equals the number of words in the vocabulary, which in turn is the length of the encoded input documents.

We can store this in a new variable called n_words, as follows:
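One way to derive it from the encoded data:

```python
# the width of the encoded vectors defines the input size
n_words = Xtest.shape[1]
```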




We also need class labels for all of the training and test review data. We loaded and encoded the reviews deterministically (negative, then positive), so we can specify the labels directly, as follows:
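A sketch of those labels:

```python
from numpy import array

# the first 900 training docs are negative (0), the remaining 900 positive (1)
ytrain = array([0 for _ in range(900)] + [1 for _ in range(900)])
# the first 100 test docs are negative, the remaining 100 positive
ytest = array([0 for _ in range(100)] + [1 for _ in range(100)])
```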



We can now define the network.

All model configuration was found with very little trial and error and should not be considered tuned for this problem.

We will use a single hidden layer with 50 neurons and a rectified linear activation function. The output layer is a single neuron with a sigmoid activation function for predicting 0 for negative and 1 for positive reviews.

The network will be trained using the efficient Adam implementation of gradient descent and the binary cross entropy loss function, suited to binary classification problems. We will keep track of accuracy when training and evaluating the model.
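A sketch of that network definition in Keras:

```python
from keras.models import Sequential
from keras.layers import Dense

# define the network
model = Sequential()
model.add(Dense(50, input_shape=(n_words,), activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# compile the network
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```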



Next, we can fit the model on the training data; in this case, the model is small and is easily fit in 50 epochs.
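For example:

```python
# fit the network on the training data
model.fit(Xtrain, ytrain, epochs=50, verbose=2)
```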



Finally, once the model is trained, we can evaluate its performance by making predictions on the test dataset and printing the accuracy.
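A sketch of the evaluation step:

```python
# evaluate the fit model on the held-out test data
loss, acc = model.evaluate(Xtest, ytest, verbose=0)
print('Test Accuracy: %f' % (acc * 100))
```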



The complete example is listed below.



Running the example, we can see that the model easily fits the training data within the 50 epochs, achieving 100% accuracy.

Evaluating the model on the test dataset, we can see that the model does well, achieving an accuracy above 90%, better than the ballpark of low-to-mid 80s seen in the original paper.

However, it is important to note that this is not an apples-to-apples comparison, as the original paper used 10-fold cross validation to estimate model skill instead of a single train/test split.



Next, let’s look at testing different word scoring methods for the bag-of-words model.

Comparing Word Scoring Methods

The texts_to_matrix() function for the Tokenizer in the Keras API provides 4 different methods for scoring words; they are:

  • ‘binary‘: Where words are marked as present (1) or absent (0).
  • ‘count‘: Where the occurrence count for each word is marked as an integer.
  • ‘tfidf‘: Where each word is scored based on its frequency, such that words that are common across all documents are penalized.
  • ‘freq‘: Where words are scored based on their frequency of occurrence within the document.

We can evaluate the skill of the model developed in the previous section fit using each of the 4 supported word scoring modes.

This first involves the development of a function to create an encoding of the loaded documents based on a chosen scoring mode. The function creates the tokenizer, fits it on the training documents, then creates the train and test encodings using the chosen mode. The function prepare_data() implements this behavior given lists of train and test documents.
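One way prepare_data() might look:

```python
# fit a tokenizer on the train docs and encode both sets with a scoring mode
def prepare_data(train_docs, test_docs, mode):
    # create the tokenizer and fit it on the training documents only
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(train_docs)
    # encode the train and test sets with the chosen mode
    Xtrain = tokenizer.texts_to_matrix(train_docs, mode=mode)
    Xtest = tokenizer.texts_to_matrix(test_docs, mode=mode)
    return Xtrain, Xtest
```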



We also need a function to evaluate the MLP given a specific encoding of the data.

Because neural networks are stochastic, they can produce different results when the same model is fit on the same data. This is mainly because of the random initial weights and the shuffling of patterns during mini-batch gradient descent. This means that any one scoring of a model is unreliable and we should estimate model skill based on an average of multiple runs.

The function below, named evaluate_mode(), takes encoded documents, trains the MLP on the train set and estimates its skill on the test set 30 times over, and returns a list of the accuracy scores across all of these runs.
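A sketch of evaluate_mode(), reusing the same network definition as the first model:

```python
from keras.models import Sequential
from keras.layers import Dense

# repeatedly fit and evaluate the MLP, returning the test accuracy of each run
def evaluate_mode(Xtrain, ytrain, Xtest, ytest):
    scores = list()
    n_repeats = 30
    n_words = Xtest.shape[1]
    for i in range(n_repeats):
        # define and compile a fresh network for this run
        model = Sequential()
        model.add(Dense(50, input_shape=(n_words,), activation='relu'))
        model.add(Dense(1, activation='sigmoid'))
        model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
        # train on the train set and evaluate on the test set
        model.fit(Xtrain, ytrain, epochs=50, verbose=0)
        _, acc = model.evaluate(Xtest, ytest, verbose=0)
        scores.append(acc)
        print('%d accuracy: %s' % ((i + 1), acc))
    return scores
```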



We are now ready to evaluate the performance of the 4 different word scoring methods.

Pulling all of this together, the complete example is listed below.



Running the example may take a while (about an hour on modern hardware with CPUs, not GPUs).

At the end of the run, summary statistics for each word scoring method are provided, summarizing the distribution of model skill scores across each of the 30 runs per mode.

We can see that the mean score of both the ‘freq‘ and ‘binary‘ methods appear to be better than ‘count‘ and ‘tfidf‘.



A box and whisker plot of the results is also presented, summarizing the accuracy distributions per configuration.

We can see that the distribution for the ‘freq’ configuration is tight, which is encouraging given that it is also well performing. Additionally, we can see that ‘binary’ achieved the best results with a modest spread and might be the preferred approach for this dataset.

Box and Whisker Plot for Model Accuracy with Different Word Scoring Methods

Making a Prediction for New Reviews

Finally, we can use the final model to make predictions for new textual reviews.

This is why we wanted the model in the first place.

Predicting the sentiment of new reviews involves following the same steps used to prepare the test data. Specifically, loading the text, cleaning the document, filtering tokens by the chosen vocabulary, converting the remaining tokens to a line, encoding it using the Tokenizer, and making a prediction.

We can make a prediction of a class value directly with the fit model by calling predict_classes(), which will return 0 for a negative review and 1 for a positive review.

All of these steps can be put into a new function called predict_sentiment() that requires the review text, the vocabulary, the tokenizer, and the fit model, as follows:
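One way predict_sentiment() might be written; predict_classes() is the older Keras Sequential API mentioned above:

```python
# classify a review as negative (0) or positive (1)
def predict_sentiment(review, vocab, tokenizer, model):
    # clean the review text
    tokens = clean_doc(review)
    # filter tokens by the chosen vocabulary
    tokens = [w for w in tokens if w in vocab]
    # convert the remaining tokens to a line and encode it
    line = ' '.join(tokens)
    encoded = tokenizer.texts_to_matrix([line], mode='freq')
    # predict sentiment
    yhat = model.predict_classes(encoded, verbose=0)
    return round(yhat[0, 0])
```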



We can now make predictions for new review texts.

Below is an example with both a clearly positive and a clearly negative review using the simple MLP developed above with the frequency word scoring mode.
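A sketch of that test, using two short made-up reviews:

```python
# a clearly positive review (made-up text)
text = 'Best movie ever! It was great, I recommend it.'
print(predict_sentiment(text, vocab, tokenizer, model))  # expect 1
# a clearly negative review (made-up text)
text = 'This is a bad movie. Do not watch it. It sucks.'
print(predict_sentiment(text, vocab, tokenizer, model))  # expect 0
```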



Running the example correctly classifies these reviews.




Ideally, we would fit the model on all available data (train and test) to create a final model and save the model and tokenizer to file so that they can be loaded and used in new software.

Extensions

This section lists some extensions if you are looking to get more out of this tutorial.

  • Manage Vocabulary. Explore using a larger or smaller vocabulary. Perhaps you can get better performance with a smaller set of words.
  • Tune the Network Topology. Explore alternate network topologies such as deeper or wider networks. Perhaps you can get better performance with a more suited network.
  • Use Regularization. Explore the use of regularization techniques, such as dropout. Perhaps you can delay the convergence of the model and achieve better test set performance.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Dataset

  • Movie Review Data (polarity dataset v2.0), Cornell University.
  • A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts, 2004.

APIs

  • Keras Tokenizer text preprocessing API.
  • Keras Sequential model API.

Summary

In this tutorial, you discovered how to develop a bag-of-words model for predicting the sentiment of movie reviews.

Specifically, you learned:

  • How to prepare the review text data for modeling with a restricted vocabulary.
  • How to use the bag-of-words model to prepare train and test data.
  • How to develop a Multilayer Perceptron bag-of-words model and use it to make predictions on new review text data.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.