Time-Delay Neural Networks

In the last post, we described how to implement Decision Trees on Embedded Systems. We ended the post by noting that while decision trees are well suited for data classification, implementing them on embedded systems is very challenging because memory is, more often than not, limited.

Time-delay neural networks (TDNNs), another approach to data classification, have gained momentum in recent years. They perform very well on time series and are therefore interesting for a wide range of applications, such as stock market prediction, image sequence analysis, and speech recognition. Moreover, they offer one significant advantage over decision trees: they can be implemented more efficiently on embedded systems. In the following, we show how Matlab’s Neural Network Time Series Tool, or its command-line counterpart, the timedelaynet function, can be used to train time-delay neural networks and run them on embedded systems. The code discussed here can be found on GitHub.

Artificial neural networks loosely mimic the operation of the human brain. Wikipedia has the following short and crisp introduction:

Neural networks are a computational approach which is based on a large collection of neural units loosely modeling the way a biological brain solves problems with large clusters of biological neurons connected by axons. Each neural unit is connected with many others, and links can be enforcing or inhibitory in their effect on the activation state of connected neural units. Each individual neural unit may have a summation function which combines the values of all its inputs together. There may be a threshold function or limiting function on each connection and on the unit itself such that it must surpass it before it can propagate to other neurons. These systems are self-learning and trained rather than explicitly programmed and excel in areas where the solution or feature detection is difficult to express in a traditional computer program.

Time-delay neural networks work on sequential data, e.g., time series, by augmenting the input with time-delayed copies of previous inputs:

[Figure: a time-delay neural network, where the input is augmented with delayed copies of previous inputs]
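With input delays 1:2, for example, the network at time t does not see the current sample but the stacked vector [x(t−1); x(t−2)]. The following minimal Matlab sketch (our illustration, not part of the original code) builds this augmented input by hand:

x = rand(8, 100);            % toy sequence: 8 variables over 100 time steps
delayed = [x(:, 2:end-1);    % x(t-1) for t = 3..100
           x(:, 1:end-2)];   % x(t-2) for t = 3..100
% Each column of 'delayed' is the augmented input that a static
% network would see at time steps t = 3..100.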

We use Matlab’s pollution mortality data set to show how to create and deploy a time-delay neural network. Each input sample consists of eight measurements of the ambient environment (temperature, relative humidity, carbon monoxide, sulfur dioxide, nitrogen dioxide, hydrocarbons, ozone, particulates), and each target consists of three output variables (total mortality, respiratory mortality, cardiovascular mortality). We can either use the Neural Network GUI

or command-line functions to construct a time-delay neural network with ten hidden nodes and input delays of one and two time steps:

% Pollution data set:
% - pollutionInputs: each input consists of 8 variables.
% - pollutionTargets: each target consists of 3 variables.
load pollution_dataset

% Training function
trainFcn = 'trainlm'; % Levenberg-Marquardt backpropagation.

% Create a time-delay network.
inputDelays = 1:2;
hiddenLayerSize = 10;
net = timedelaynet(inputDelays, hiddenLayerSize, trainFcn);

% Prepare data for training.
[x,xi,ai,t] = preparets(net, pollutionInputs, pollutionTargets);

% Set up division of data for training, validation, and testing.
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

% Train the network.
[net,tr] = train(net,x,t,xi,ai);
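As a quick sanity check (our addition, using standard Neural Network Toolbox calls), we can simulate the trained network on the prepared data and measure its error:

% Simulate the trained network and compute the mean squared error
% between its outputs and the targets.
y = net(x, xi, ai);
performance = perform(net, t, y)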

Next, we want to deploy the classifier on a real system, and hence we need a C/C++ implementation of the trained neural network. We implement the export function extractNeuralNetwork, which takes the trained network net as input and automatically creates the class TDNNClassifier, consisting of the header files Data.h and TDNNClassifier.h and the source file TDNNClassifier.cpp:

% Create C++ implementation of the trained network.
extractNeuralNetwork(net);

Now we can use the neural network in a C/C++ project to predict pollution-induced mortality rates:

#include <iostream>
#include "Data.h"
#include "TDNNClassifier.h"

using namespace std;

static const float POLLUTION_DATASET[5][8] = {
  {72.38, 29.20, 11.51, 3.37, 9.64, 45.79, 6.69, 72.72},
  {67.19, 67.51, 8.92, 2.59, 10.05, 43.90, 6.83, 49.60},
  {62.94, 61.42, 9.48, 3.29, 7.80, 32.18, 4.98, 55.68},
  {72.49, 58.99, 10.28, 3.04, 13.39, 40.43, 9.25, 55.16},
  {74.25, 34.80, 10.57, 3.39, 11.90, 48.53, 9.15, 66.02}
};

int main(int argc, char* argv[]) {
  // Test the classifier with the first five inputs of Matlab's dataset
  // pollution_dataset, which was used to train the neural network.
  Data data;
  TDNNClassifier classifier;
  float *prediction;

  for (int d = 0; d < 5; d++) {
    for (int i = 0; i < 8; i++) {
      data.value[i] = POLLUTION_DATASET[d][i];
    }
    prediction = classifier.Predict(data);
    cout << "Prediction " << prediction[0] << ", " << prediction[1]
         << ", " << prediction[2] << endl;
  }
  return 0;
}

Check out the GitHub project TDNN-Matlab2Cpp for more details and the full source code.

Decision Trees on Embedded Systems

Many approaches to classification and regression deliver high accuracy but, due to their black-box nature, are not interpretable by their users; models produced with support vector machines (SVMs) are one example. Hence, these models do not give us any intuitive insight into why they perform well in certain situations while failing miserably in others.

White-box models are the opposite: they let us look inside the model to find out what it learned from the training data and how it uses the input variables to predict the value of the output variable. A popular class of white-box models are decision trees. In the following, we take a closer look at them, more specifically at Matlab’s TreeBagger random decision forest implementation, and show how we can run the classifier on embedded systems. The code discussed here can be found on GitHub.

Decision trees are tree-like graphs. Each interior node represents one of the input variables. Edges going to a node’s children represent possible values of that input variable. The leaf nodes hold the values of the output variable given that the input variables match the values along the edges traversed from the root to the given leaf. The following tree is an example from Akanoo, a company providing automated on-site marketing solutions. It shows the classification of web shop visitors as buyers or non-buyers:

[Figure: example decision tree classifying web shop visitors as buyers or non-buyers]

The tree shows the basic principle of a decision tree analysis. In reality, the analysis depends on a variety of additional factors, i.e., input variables; Akanoo, for example, uses a combination of over 50 independent variables to calculate purchasing probabilities.
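To see the white-box property in action before turning to ensembles, we can train a single classification tree and print its decision rules; a minimal sketch (our addition, assuming a Matlab version that provides fitctree) on the iris data used below:

% Train a single decision tree and print its if/else structure,
% which is directly human-readable.
load fisheriris
tree = fitctree(meas, species);
view(tree, 'Mode', 'text')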

Matlab’s TreeBagger function combines multiple decision trees, each using a random subset of the input variables, to increase the classification accuracy. The following example uses Fisher’s iris flower data set to show how TreeBagger is used to create 20 decision trees to predict three different flower species based on four input variables: sepal length, sepal width, petal length, and petal width.

% fisheriris data set:
%  - meas: matrix of input variables
%  - species: vector of species names
load fisheriris
num_bags = 20;

% Create numerical class labels.
species_class(strcmp(species, 'setosa')) = 1;
species_class(strcmp(species, 'versicolor')) = 2;
species_class(strcmp(species, 'virginica')) = 3;

% Create the decision trees.
B = TreeBagger(num_bags, meas, species_class);

B holds the ensemble of 20 decision trees, which we can use to predict the species from the four input variables. In the following, we use the first row of the measurement matrix to predict its species.

predict(B, meas(1,:))
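Note that TreeBagger returns its predictions as a cell array of character labels; a short sketch (our addition) of converting the result back to the numeric class defined above:

% predict returns the label as a cell array of character vectors,
% e.g. {'1'}; convert it back to a numeric class.
pred = predict(B, meas(1,:));
pred_class = str2double(pred{1})   % 1 = setosa, 2 = versicolor, 3 = virginica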

Each of the trees in B has around 20 nodes, even for this rather simple classification example. For more complex problems the trees easily grow to many hundreds of nodes. As long as we use the classifier within Matlab, this is not really a problem. However, as soon as we want to deploy the classifier on a real device, we need to re-implement the decision trees in a low-level language, such as C or C++, and export the trees trained in Matlab. Luckily, Paul Kendrick’s GitHub project decisionTreeMat2Cpp does exactly what we need:

This program takes a decision tree trained in Matlab using TreeBagger or the classification tree function ClassificationTree and outputs a text file containing all the branch information. This text file can be read by the attached C++ class, and then used to make decisions based on presented features in deployed applications.

However, many embedded systems do not have a file system, making it impossible to read in the text files describing the decision trees. Hence, we extend the package to directly output a header file containing all the branch information. This header file can easily be included in the C++ class that implements the logic of the decision trees. The following Matlab command

extractDecTreeStruct(B, unique(species_class), 1, num_bags);

creates the header file decTreeConstants.h, which looks like this (only showing the first few lines of the file):

#ifndef DECTREECONSTANTS_h
#define DECTREECONSTANTS_h

const int NO_CLASSES = 3;
const int NO_BAGS = 20;

const int NO_BRANCHES[20] = {5, 10, 7, 9, 11, 6, 7, 8, 8, 5, 8, 7, 5, 10, 9, 12, 8, 6, 5, 13};

const int BRANCH_LENGTHS[20][13] = {
{1, 2, 3, 4, 4},
{2, 3, 3, 3, 3, 4, 4, 4, 5, 5},
{1, 2, 3, 4, 5, 6, 6},
{1, 3, 3, 4, 4, 5, 5, 5, 5},
{1, 3, 3, 4, 4, 5, 5, 6, 6, 6, 6},
{1, 2, 3, 4, 5, 5},
...
...

This simple C++ example illustrates how we can then use the decision tree class to classify a species based on the input variables 5.7, 2.8, 4.1, and 1.3, corresponding to sepal length, sepal width, petal length, and petal width:

#include "DecisionTreeClass.hpp"

using namespace std;

int main(int argc, char* argv[]) {

  DTree tree;

  float input[4] = {5.7f, 2.8f, 4.1f, 1.3f};
  int prediction = tree.decisionTreeFun(input);
  cout &lt;&lt; "Predicted species: " &lt;&lt; prediction &lt;&lt; endl;

  return 0;
}

Check out the GitHub project TreeBagger-Matlab2Cpp for more details and the full source code.

One challenging problem remains that this post does not cover: embedded systems are chronically short on memory and may therefore struggle to fit the large three-dimensional arrays needed by the decision trees’ C++ implementation.