IMDB Sentiment Analysis (Simple RNN)

Biostat 203B

Author

Dr. Hua Zhou @ UCLA

Published

February 28, 2024

1 Setup

Display system information for reproducibility.

import IPython
print(IPython.sys_info())
{'commit_hash': '8b1204b6c',
 'commit_source': 'installation',
 'default_encoding': 'utf-8',
 'ipython_path': '/opt/venv/lib/python3.10/site-packages/IPython',
 'ipython_version': '8.21.0',
 'os_name': 'posix',
 'platform': 'Linux-6.6.12-linuxkit-aarch64-with-glibc2.35',
 'sys_executable': '/opt/venv/bin/python',
 'sys_platform': 'linux',
 'sys_version': '3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]'}
sessionInfo()
R version 4.3.2 (2023-10-31)
Platform: aarch64-unknown-linux-gnu (64-bit)
Running under: Ubuntu 22.04.3 LTS

Matrix products: default
BLAS:   /usr/lib/aarch64-linux-gnu/openblas-pthread/libblas.so.3 
LAPACK: /usr/lib/aarch64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so;  LAPACK version 3.10.0

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8    
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

time zone: Etc/UTC
tzcode source: system (glibc)

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

loaded via a namespace (and not attached):
 [1] digest_0.6.34     fastmap_1.1.1     xfun_0.42         Matrix_1.6-1.1   
 [5] lattice_0.21-9    reticulate_1.35.0 knitr_1.45        htmltools_0.5.7  
 [9] png_0.1-8         rmarkdown_2.25    cli_3.6.2         grid_4.3.2       
[13] compiler_4.3.2    rstudioapi_0.15.0 tools_4.3.2       evaluate_0.23    
[17] Rcpp_1.0.12       yaml_2.3.8        rlang_1.1.3       jsonlite_1.8.8   
[21] htmlwidgets_1.6.4

Load libraries.

# Plotting tool
import matplotlib.pyplot as plt
# Load Tensorflow and Keras
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
library(keras)

2 Prepare data

From the documentation:

Dataset of 25,000 movie reviews from IMDB, labeled by sentiment (positive/negative). Reviews have been preprocessed, and each review is encoded as a sequence of word indexes (integers). For convenience, words are indexed by overall frequency in the dataset, so that for instance the integer “3” encodes the 3rd most frequent word in the data. This allows for quick filtering operations such as: “only consider the top 10,000 most common words, but eliminate the top 20 most common words”.

Retrieve IMDB data:

max_features = 10000 # to be consistent with lasso example
# Cut texts after this number of words (among top max_features most common words)
maxlen = 80
batch_size = 32

print('Loading data...')
Loading data...
(x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(
  num_words = max_features
  )
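
To see what these integer codes represent, they can be mapped back to words with the built-in word index. A minimal sketch, assuming the default load_data() offsets (start_char = 1, oov_char = 2, index_from = 3):

# Decode the first training review back to (approximate) text.
word_index = keras.datasets.imdb.get_word_index()          # word -> rank
inv_index = {idx + 3: word for word, idx in word_index.items()}
inv_index.update({0: '<pad>', 1: '<start>', 2: '<unk>'})
print(' '.join(inv_index.get(i, '<unk>') for i in x_train[0][:20]))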

Sizes of training and test sets:

print(len(x_train), 'train sequences')
25000 train sequences
print(len(x_test), 'test sequences')
25000 test sequences

We pad shorter reviews and truncate longer ones to maxlen=80 words.

print('Pad sequences (samples x time)')
Pad sequences (samples x time)
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen = maxlen)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen = maxlen)
print('x_train shape:', x_train.shape)
x_train shape: (25000, 80)
print('x_test shape:', x_test.shape)
x_test shape: (25000, 80)
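
By default pad_sequences pads and truncates at the beginning of each sequence (padding = 'pre', truncating = 'pre'), so it is the last 80 words of a long review that are kept. A quick illustration:

demo = [[1, 2, 3], [4, 5, 6, 7, 8]]
# Expected: [[0 1 2 3]
#            [5 6 7 8]]
print(keras.preprocessing.sequence.pad_sequences(demo, maxlen = 4))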
max_features <- 10000 # to be consistent with lasso example

# Cut texts after this number of words (among top max_features most common words)
maxlen <- 80  

cat('Loading data...\n')
Loading data...
imdb <- dataset_imdb(num_words = max_features)
imdb$train$x[[1]]
  [1]    1   14   22   16   43  530  973 1622 1385   65  458 4468   66 3941    4
 [16]  173   36  256    5   25  100   43  838  112   50  670    2    9   35  480
 [31]  284    5  150    4  172  112  167    2  336  385   39    4  172 4536 1111
 [46]   17  546   38   13  447    4  192   50   16    6  147 2025   19   14   22
 [61]    4 1920 4613  469    4   22   71   87   12   16   43  530   38   76   15
 [76]   13 1247    4   22   17  515   17   12   16  626   18    2    5   62  386
 [91]   12    8  316    8  106    5    4 2223 5244   16  480   66 3785   33    4
[106]  130   12   16   38  619    5   25  124   51   36  135   48   25 1415   33
[121]    6   22   12  215   28   77   52    5   14  407   16   82    2    8    4
[136]  107  117 5952   15  256    4    2    7 3766    5  723   36   71   43  530
[151]  476   26  400  317   46    7    4    2 1029   13  104   88    4  381   15
[166]  297   98   32 2071   56   26  141    6  194 7486   18    4  226   22   21
[181]  134  476   26  480    5  144   30 5535   18   51   36   28  224   92   25
[196]  104    4  226   65   16   38 1334   88   12   16  283    5   16 4472  113
[211]  103   32   15   16 5345   19  178   32
imdb$train$y[[1]]
[1] 1

Sizes of training and test sets:

x_train <- imdb$train$x
y_train <- imdb$train$y
x_test <- imdb$test$x
y_test <- imdb$test$y

cat(length(x_train), 'train sequences\n')
25000 train sequences
cat(length(x_test), 'test sequences\n')
25000 test sequences

We pad shorter reviews and truncate longer ones to maxlen=80 words.

cat('Pad sequences (samples x time)\n')
Pad sequences (samples x time)
x_train <- pad_sequences(x_train, maxlen = maxlen)
x_test <- pad_sequences(x_test, maxlen = maxlen)
cat('x_train shape:', dim(x_train), '\n')
x_train shape: 25000 80 
cat('x_test shape:', dim(x_test), '\n')
x_test shape: 25000 80 

3 Build model

The network embeds each of the 10,000 word indices into a 128-dimensional vector, runs the embedded sequence through a SimpleRNN layer with 64 hidden units, and ends with a single sigmoid unit that outputs the probability of a positive review.

model = keras.Sequential([
  layers.Embedding(max_features, 128),
  layers.SimpleRNN(units = 64),
  layers.Dense(units = 1, activation = 'sigmoid')
])

# try using different optimizers and different optimizer configs
model.compile(
  loss = 'binary_crossentropy',
  optimizer = 'adam',
  metrics = ['accuracy']
)

model.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 embedding (Embedding)       (None, None, 128)         1280000   
                                                                 
 simple_rnn (SimpleRNN)      (None, 64)                12352     
                                                                 
 dense (Dense)               (None, 1)                 65        
                                                                 
=================================================================
Total params: 1292417 (4.93 MB)
Trainable params: 1292417 (4.93 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
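
The parameter counts in the summary can be checked by hand from the layer sizes above; a quick sketch:

# Embedding: one 128-dim vector per word in the 10,000-word vocabulary.
embedding_params = max_features * 128                 # 1,280,000
# SimpleRNN: input weights (128 x 64) + recurrent weights (64 x 64) + 64 biases.
rnn_params = 64 * (128 + 64 + 1)                      # 12,352
# Dense: 64 weights + 1 bias.
dense_params = 64 + 1                                 # 65
print(embedding_params + rnn_params + dense_params)   # 1,292,417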
model <- keras_model_sequential()
model %>%
  layer_embedding(input_dim = max_features, output_dim = 128) %>% 
  layer_simple_rnn(units = 64) %>% 
  layer_dense(units = 1, activation = 'sigmoid')

# Try using different optimizers and different optimizer configs
model %>% compile(
  loss = 'binary_crossentropy',
  optimizer = 'adam',
  metrics = c('accuracy')
)

summary(model)
Model: "sequential_1"
________________________________________________________________________________
 Layer (type)                       Output Shape                    Param #     
================================================================================
 embedding_1 (Embedding)            (None, None, 128)               1280000     
 simple_rnn_1 (SimpleRNN)           (None, 64)                      12352       
 dense_1 (Dense)                    (None, 1)                       65          
================================================================================
Total params: 1292417 (4.93 MB)
Trainable params: 1292417 (4.93 MB)
Non-trainable params: 0 (0.00 Byte)
________________________________________________________________________________

4 Training

We train for 15 epochs with batch size 32, holding out 20% of the training sequences for validation.

print('Train...')
Train...
history = model.fit(
  x_train, y_train,
  batch_size = batch_size,
  epochs = 15,
  validation_split = 0.2, 
  verbose = 2 # one line per epoch
)
Epoch 1/15
625/625 - 7s - loss: 0.5423 - accuracy: 0.7051 - val_loss: 0.4380 - val_accuracy: 0.7958 - 7s/epoch - 11ms/step
Epoch 2/15
625/625 - 6s - loss: 0.3196 - accuracy: 0.8671 - val_loss: 0.4553 - val_accuracy: 0.7850 - 6s/epoch - 10ms/step
Epoch 3/15
625/625 - 6s - loss: 0.1791 - accuracy: 0.9319 - val_loss: 0.5723 - val_accuracy: 0.7962 - 6s/epoch - 10ms/step
Epoch 4/15
625/625 - 6s - loss: 0.0936 - accuracy: 0.9675 - val_loss: 0.7204 - val_accuracy: 0.7492 - 6s/epoch - 10ms/step
Epoch 5/15
625/625 - 6s - loss: 0.0463 - accuracy: 0.9851 - val_loss: 0.8214 - val_accuracy: 0.7828 - 6s/epoch - 10ms/step
Epoch 6/15
625/625 - 6s - loss: 0.0529 - accuracy: 0.9814 - val_loss: 0.8401 - val_accuracy: 0.7826 - 6s/epoch - 10ms/step
Epoch 7/15
625/625 - 6s - loss: 0.0288 - accuracy: 0.9908 - val_loss: 1.0047 - val_accuracy: 0.7708 - 6s/epoch - 10ms/step
Epoch 8/15
625/625 - 6s - loss: 0.0116 - accuracy: 0.9970 - val_loss: 1.1062 - val_accuracy: 0.7852 - 6s/epoch - 10ms/step
Epoch 9/15
625/625 - 6s - loss: 0.0387 - accuracy: 0.9863 - val_loss: 1.0947 - val_accuracy: 0.7524 - 6s/epoch - 10ms/step
Epoch 10/15
625/625 - 6s - loss: 0.0356 - accuracy: 0.9868 - val_loss: 1.1266 - val_accuracy: 0.7352 - 6s/epoch - 10ms/step
Epoch 11/15
625/625 - 6s - loss: 0.0215 - accuracy: 0.9928 - val_loss: 1.2244 - val_accuracy: 0.7410 - 6s/epoch - 10ms/step
Epoch 12/15
625/625 - 6s - loss: 0.0059 - accuracy: 0.9985 - val_loss: 1.2951 - val_accuracy: 0.7544 - 6s/epoch - 10ms/step
Epoch 13/15
625/625 - 6s - loss: 0.0189 - accuracy: 0.9939 - val_loss: 1.2179 - val_accuracy: 0.7654 - 6s/epoch - 10ms/step
Epoch 14/15
625/625 - 6s - loss: 0.0537 - accuracy: 0.9812 - val_loss: 1.1281 - val_accuracy: 0.7490 - 6s/epoch - 10ms/step
Epoch 15/15
625/625 - 6s - loss: 0.0189 - accuracy: 0.9937 - val_loss: 1.5583 - val_accuracy: 0.6582 - 6s/epoch - 10ms/step

Visualize training process:

plt.figure()
plt.ylabel("Loss (training and validation)")
plt.xlabel("Training Epochs")
plt.ylim([0, 2])
(0.0, 2.0)
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.show()

plt.figure()
plt.ylabel("Accuracy (training and validation)")
plt.xlabel("Training Epochs")
plt.ylim([0, 1])
(0.0, 1.0)
plt.plot(history.history["accuracy"])
plt.plot(history.history["val_accuracy"])
plt.show()
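
The curves show a typical overfitting pattern: training loss keeps decreasing while validation loss starts rising after the first few epochs. A minimal sketch of one common remedy, early stopping (assuming the model is rebuilt and recompiled before refitting):

# Stop when validation loss has not improved for 2 epochs and keep the best weights.
early_stop = keras.callbacks.EarlyStopping(
  monitor = 'val_loss',
  patience = 2,
  restore_best_weights = True
)
history = model.fit(
  x_train, y_train,
  batch_size = batch_size,
  epochs = 15,
  validation_split = 0.2,
  callbacks = [early_stop],
  verbose = 2
)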

batch_size <- 32

cat('Train...\n')
Train...
system.time({
history <- model %>% fit(
  x_train, y_train,
  batch_size = batch_size,
  epochs = 15,
  validation_split = 0.2,
  verbose = 2
)
})
Epoch 1/15
625/625 - 7s - loss: 0.5897 - accuracy: 0.6669 - val_loss: 0.4819 - val_accuracy: 0.7708 - 7s/epoch - 11ms/step
Epoch 2/15
625/625 - 6s - loss: 0.3520 - accuracy: 0.8482 - val_loss: 0.4884 - val_accuracy: 0.7822 - 6s/epoch - 10ms/step
Epoch 3/15
625/625 - 6s - loss: 0.1550 - accuracy: 0.9445 - val_loss: 0.6192 - val_accuracy: 0.7752 - 6s/epoch - 10ms/step
Epoch 4/15
625/625 - 6s - loss: 0.0760 - accuracy: 0.9735 - val_loss: 0.8027 - val_accuracy: 0.7394 - 6s/epoch - 10ms/step
Epoch 5/15
625/625 - 6s - loss: 0.0334 - accuracy: 0.9887 - val_loss: 0.8977 - val_accuracy: 0.7686 - 6s/epoch - 10ms/step
Epoch 6/15
625/625 - 6s - loss: 0.0165 - accuracy: 0.9944 - val_loss: 1.2323 - val_accuracy: 0.6840 - 6s/epoch - 10ms/step
Epoch 7/15
625/625 - 6s - loss: 0.0513 - accuracy: 0.9803 - val_loss: 0.9906 - val_accuracy: 0.7502 - 6s/epoch - 10ms/step
Epoch 8/15
625/625 - 6s - loss: 0.0292 - accuracy: 0.9904 - val_loss: 1.0519 - val_accuracy: 0.7636 - 6s/epoch - 10ms/step
Epoch 9/15
625/625 - 7s - loss: 0.0381 - accuracy: 0.9870 - val_loss: 1.1305 - val_accuracy: 0.7304 - 7s/epoch - 11ms/step
Epoch 10/15
625/625 - 6s - loss: 0.0100 - accuracy: 0.9969 - val_loss: 1.3710 - val_accuracy: 0.7014 - 6s/epoch - 10ms/step
Epoch 11/15
625/625 - 7s - loss: 0.0055 - accuracy: 0.9984 - val_loss: 1.2613 - val_accuracy: 0.7652 - 7s/epoch - 11ms/step
Epoch 12/15
625/625 - 7s - loss: 0.0376 - accuracy: 0.9872 - val_loss: 1.4155 - val_accuracy: 0.6672 - 7s/epoch - 10ms/step
Epoch 13/15
625/625 - 6s - loss: 0.0283 - accuracy: 0.9902 - val_loss: 1.2157 - val_accuracy: 0.7604 - 6s/epoch - 10ms/step
Epoch 14/15
625/625 - 7s - loss: 0.0069 - accuracy: 0.9979 - val_loss: 1.3645 - val_accuracy: 0.7444 - 7s/epoch - 11ms/step
Epoch 15/15
625/625 - 6s - loss: 0.0011 - accuracy: 0.9998 - val_loss: 1.4507 - val_accuracy: 0.7428 - 6s/epoch - 10ms/step
   user  system elapsed 
165.901  22.783  97.143 

Visualize training process:

plot(history)

5 Testing

Finally, evaluate the fitted models on the held-out test set of 25,000 reviews.

score, acc = model.evaluate(
  x_test, y_test,
  batch_size = batch_size,
  verbose = 2
)
782/782 - 2s - loss: 1.5341 - accuracy: 0.6565 - 2s/epoch - 2ms/step
print('Test score:', score)
Test score: 1.534104347229004
print('Test accuracy:', acc)
Test accuracy: 0.6565200090408325
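
For individual reviews, model.predict() returns the sigmoid output, i.e. the estimated probability that a review is positive; thresholding at 0.5 gives class labels. A minimal sketch:

prob = model.predict(x_test[:5], verbose = 0).flatten()  # P(positive review)
pred = (prob > 0.5).astype(int)                          # predicted labels
print(prob, pred, y_test[:5])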
scores <- model %>% evaluate(
  x_test, y_test,
  batch_size = batch_size
)
782/782 - 2s - loss: 1.4382 - accuracy: 0.7388 - 2s/epoch - 2ms/step
cat('Test score:', scores[[1]])
Test score: 1.438163
cat('Test accuracy:', scores[[2]])
Test accuracy: 0.73884