

Time series classification with Tensorflow
posted by Burak Himmetoglu, August 28, 2017

Time-series data arise in many fields including finance, signal processing, speech recognition and medicine. A standard approach to time-series problems usually requires manual engineering of features which can then be fed into a machine learning algorithm. Engineering of features generally requires some domain knowledge of the discipline from which the data originated. For example, if one is dealing with signals (e.g. classification of EEG signals), then possible features would involve power spectra at various frequency bands, Hjorth parameters and several other specialized statistical properties.
A similar situation arises in image classification, where manually engineered features (obtained by applying a number of filters) could be used in classification algorithms. However, with the advent of deep learning, it has been shown that convolutional neural networks (CNN) can outperform this strategy. A CNN does not require any manual engineering of features. During training, the CNN learns lots of “filters” with increasing complexity as the layers get deeper, and uses them in a final classifier.
In this blog post, I will discuss the use of deep learning methods to classify time-series data, without the need to manually engineer features. The example I will consider is the classic Human Activity Recognition (HAR) dataset from the UCI repository. The dataset contains the raw time-series data, as well as a pre-processed version with 561 engineered features. I will compare the performance of typical machine learning algorithms which use the engineered features with two deep learning methods (convolutional and recurrent neural networks) and show that deep learning can surpass the performance of the former.
I have used Tensorflow for the implementation and training of the models discussed in this post. In the discussion below, code snippets are provided to explain the implementation. For the complete code, please see my Github repository.
Convolutional Neural Networks (CNN)
The first step is to cast the data into a numpy array with shape (batch_size, seq_len, n_channels), where batch_size is the number of examples in a batch during training, seq_len is the length of the sequence in the time series (128 in our case), and n_channels is the number of channels in which measurements are made. There are 9 channels in this case: 3 different acceleration measurements, each taken along the 3 coordinate axes. There are 6 classes of activities to which each observation can belong: LAYING, STANDING, SITTING, WALKING_DOWNSTAIRS, WALKING_UPSTAIRS and WALKING.
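As a concrete illustration, here is a minimal sketch of these constants and the expected array shape. The load_signals helper below is hypothetical and only stands in for the actual data-loading code in the GitHub repository:

import numpy as np

seq_len, n_channels, n_classes = 128, 9, 6   # values for the HAR dataset described above

def load_signals(n_examples=1000):
    # Hypothetical loader: stack the 9 raw signal channels into one array, return integer labels.
    X = np.random.randn(n_examples, seq_len, n_channels).astype(np.float32)
    y = np.random.randint(1, n_classes + 1, size=n_examples)
    return X, y

X_train, y_train = load_signals()
print(X_train.shape)   # (1000, 128, 9), i.e. (number of examples, seq_len, n_channels)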
First, we construct placeholders for the inputs to our computational graph:
graph = tf.Graph()

with graph.as_default():
    inputs_ = tf.placeholder(tf.float32, [None, seq_len, n_channels], name='inputs')
    labels_ = tf.placeholder(tf.float32, [None, n_classes], name='labels')
    keep_prob_ = tf.placeholder(tf.float32, name='keep')
    learning_rate_ = tf.placeholder(tf.float32, name='learning_rate')
where inputs_ are the input tensors to be fed into the graph, whose first dimension is kept at None to allow for variable batch sizes; labels_ are the one-hot encoded labels to be predicted; keep_prob_ is the keep probability used in dropout regularization to prevent overfitting; and learning_rate_ is the learning rate used by the Adam optimizer.
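Since the activity labels in the raw dataset are integers from 1 to 6, they need to be one-hot encoded before being fed into labels_. A minimal sketch of that step (the one_hot helper is illustrative, not taken from the original repository):

def one_hot(labels, n_classes=6):
    # Map integer labels 1..n_classes to rows of a (num_examples, n_classes) indicator array.
    expanded = np.zeros((labels.shape[0], n_classes), dtype=np.float32)
    expanded[np.arange(labels.shape[0]), labels - 1] = 1.0
    return expanded

print(one_hot(np.array([1, 3, 6])))
# [[1. 0. 0. 0. 0. 0.]
#  [0. 0. 1. 0. 0. 0.]
#  [0. 0. 0. 0. 0. 1.]]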
The convolutional layers are constructed using one-dimensional kernels that move through the sequence (unlike images, where 2d convolutions are used). These kernels act as filters which are being learned during training. As in many CNN architectures, the deeper the layers get, the higher the number of filters becomes. Each convolution is followed by a pooling layer that reduces the sequence length. Below is a simple picture of a possible CNN architecture that can be used:
The convolutional layers depicted above are implemented as follows:
with graph.as_default():
    # (batch, 128, 9) -> (batch, 32, 18)
    conv1 = tf.layers.conv1d(inputs=inputs_, filters=18, kernel_size=2, strides=1,
                             padding='same', activation=tf.nn.relu)
    max_pool_1 = tf.layers.max_pooling1d(inputs=conv1, pool_size=4, strides=4, padding='same')

    # (batch, 32, 18) -> (batch, 8, 36)
    conv2 = tf.layers.conv1d(inputs=max_pool_1, filters=36, kernel_size=2, strides=1,
                             padding='same', activation=tf.nn.relu)
    max_pool_2 = tf.layers.max_pooling1d(inputs=conv2, pool_size=4, strides=4, padding='same')

    # (batch, 8, 36) -> (batch, 2, 72)
    conv3 = tf.layers.conv1d(inputs=max_pool_2, filters=72, kernel_size=2, strides=1,
                             padding='same', activation=tf.nn.relu)
    max_pool_3 = tf.layers.max_pooling1d(inputs=conv3, pool_size=4, strides=4, padding='same')
Once the last layer is reached, we need to flatten the tensor and feed it into a classifier with the right number of neurons (144 in the picture above, i.e. 2 × 72 from the last pooling layer). Then the classifier outputs logits, which are used in two instances:
- Computing the softmax cross entropy, which is a standard loss measure used in multi-class problems.
- Predicting class labels from the maximum probability as well as the accuracy.
These are implemented as follows:
with graph.as_default():
    # Flatten and add dropout
    flat = tf.reshape(max_pool_3, (-1, 2 * 72))
    flat = tf.nn.dropout(flat, keep_prob=keep_prob_)

    # Predictions
    logits = tf.layers.dense(flat, n_classes)

    # Cost function and optimizer
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_))
    optimizer = tf.train.AdamOptimizer(learning_rate_).minimize(cost)

    # Accuracy
    correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(labels_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
The rest of the implementation is pretty typical, and involves feeding the graph with batches of training data and evaluating the performance on a validation set. Finally, the trained model is evaluated on the test set. With the above architecture, a batch_size of 600, a learning_rate of 0.001 (the default value), a keep_prob of 0.5 and 500 epochs of training, we obtain a test accuracy of 98%. The plot below shows how the training/validation accuracy evolves through the epochs.
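For reference, here is a minimal sketch of such a training loop, assuming the graph built above; the get_batches(X, y, batch_size) generator and the X_valid/y_valid validation split are hypothetical stand-ins for the code in the GitHub repository:

epochs, batch_size = 500, 600   # training settings quoted above

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(epochs):
        # Train on mini-batches with dropout enabled
        for x_batch, y_batch in get_batches(X_train, y_train, batch_size):
            feed = {inputs_: x_batch, labels_: y_batch,
                    keep_prob_: 0.5, learning_rate_: 0.001}
            loss, _, acc = sess.run([cost, optimizer, accuracy], feed_dict=feed)

        # Evaluate on the validation set with dropout turned off (keep probability of 1.0)
        feed = {inputs_: X_valid, labels_: y_valid, keep_prob_: 1.0}
        val_acc = sess.run(accuracy, feed_dict=feed)
        print("Epoch: {}, validation accuracy: {:.4f}".format(epoch, val_acc))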
Long Short-Term Memory Networks (LSTM)
LSTMs are quite popular for dealing with text-based data, and have been quite successful in sentiment analysis, language translation and text generation. Since this problem also involves sequences of a similar sort, an LSTM is a great candidate to try.
Below is an example architecture which can be used in our problem:
To feed the data into the network, we need to split our array into 128 pieces (one for each entry of the sequence that goes into an LSTM cell), each of shape (batch_size, n_channels). Then a single layer of neurons transforms these inputs into vectors of dimension lstm_size, which are fed into the LSTM cells. This size parameter is chosen to be larger than the number of channels; this is in a way similar to embedding layers in text applications, where words are embedded as vectors from a given vocabulary. Then one needs to pick the number of LSTM layers (lstm_layers), which I have set to 2.
For the implementation, the placeholders are the same as above. The below code snippet implements the LSTM layers:
with graph.as_default():
    # Construct the LSTM inputs and LSTM cells
    lstm_in = tf.transpose(inputs_, [1, 0, 2])       # reshape into (seq_len, batch, channels)
    lstm_in = tf.reshape(lstm_in, [-1, n_channels])  # now (seq_len * batch, n_channels)

    # To cells
    lstm_in = tf.layers.dense(lstm_in, lstm_size, activation=None)

    # Open up the tensor into a list of seq_len pieces
    lstm_in = tf.split(lstm_in, seq_len, 0)

    # Add LSTM layers; each layer gets its own cell object so that no variables are reused
    def lstm_cell():
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob_)

    cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(lstm_layers)])
    initial_state = cell.zero_state(batch_size, tf.float32)
There is an important technical detail in the above snippet. I reshaped the array from (batch_size, seq_len, n_channels) to (seq_len, batch_size, n_channels) first, so that tf.split would properly split the data (along the zeroth index) into a list of seq_len pieces, one (batch_size, lstm_size) tensor per time step. The rest is pretty standard for LSTM implementations, involving the construction of the layers (including dropout for regularization) and an initial state.
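To make the bookkeeping concrete, here is a small numpy sketch of the same shape manipulations, using an illustrative batch size of 600 (the dense layer is omitted, so the last dimension stays at n_channels rather than lstm_size):

batch_size, seq_len, n_channels = 600, 128, 9   # illustrative values

x = np.zeros((batch_size, seq_len, n_channels))
x = np.transpose(x, (1, 0, 2))       # (128, 600, 9)  -> (seq_len, batch_size, n_channels)
x = np.reshape(x, (-1, n_channels))  # (76800, 9)     -> (seq_len * batch_size, n_channels)

# In the graph, the dense layer maps the last dimension from n_channels to lstm_size, so
# splitting along axis 0 yields a list of seq_len pieces of shape (batch_size, lstm_size).
pieces = np.split(x, seq_len, axis=0)
print(len(pieces), pieces[0].shape)  # 128 (600, 9)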
The next step is to implement the forward pass through the network and the cost function. One important technical aspect is that I included gradient clipping, since it improves training by preventing exploding gradients during backpropagation. Here is what the code looks like:
with graph.as_default():
    outputs, final_state = tf.contrib.rnn.static_rnn(cell, lstm_in, dtype=tf.float32,
                                                     initial_state=initial_state)

    # We only need the last output tensor to pass into a classifier
    logits = tf.layers.dense(outputs[-1], n_classes, name='logits')
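The cost function, a gradient-clipped Adam optimizer and the accuracy metric can then be defined much as in the CNN section. The following is a minimal sketch of that step; the ±1.0 clipping bounds are chosen for illustration, not taken from the original code:

with graph.as_default():
    # Cost function (same softmax cross-entropy as in the CNN section)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_))

    # Gradient clipping: compute gradients, clip them, then apply (clip bounds are illustrative)
    train_op = tf.train.AdamOptimizer(learning_rate_)
    gradients = train_op.compute_gradients(cost)
    capped_gradients = [(tf.clip_by_value(grad, -1.0, 1.0), var)
                        for grad, var in gradients if grad is not None]
    optimizer = train_op.apply_gradients(capped_gradients)

    # Accuracy
    correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(labels_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')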