Decision Tree, Random Forest and XGBoost on Arduino

You will be surprised by how much accuracy you can achieve in just a few kilobytes of resources: Decision Tree, Random Forest and XGBoost (Extreme Gradient Boosting) are now available on your microcontrollers: highly RAM-optimized implementations for super-fast classification on embedded devices.

Decision Tree

Decision Tree is without a doubt one of the most well-known classification algorithms out there. It is so simple to understand that it was probably the first classifier you encountered in any Machine Learning course.

I won't go into the details of how a Decision Tree classifier trains and selects the splits for the input features: here I will explain how a RAM-efficient port of such a classifier is implemented.

For an introduction visit Wikipedia; for a more in-depth guide visit KDNuggets.

Since we're willing to sacrifice program space (a.k.a. flash) in favor of memory (a.k.a. RAM), because RAM is the scarcest resource in the vast majority of microcontrollers, the smart way to port a Decision Tree classifier from Python to C is to hard-code the splits in code, without keeping any reference to them in variables.
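
To see the idea in action, here's a toy Python sketch that walks a fitted sklearn Decision Tree and emits one hard-coded if/else per split. It only illustrates the principle; micromlgen (introduced below) does this for you, with more care.

from sklearn.tree import _tree

def export_if_else(clf, node=0, depth=1):
    # recursively turn a fitted DecisionTreeClassifier into nested C if/else branches
    t = clf.tree_
    pad = '    ' * depth
    if t.feature[node] == _tree.TREE_UNDEFINED:
        # leaf node: return the majority class
        return '%sreturn %d;\n' % (pad, t.value[node].argmax())
    code = '%sif (x[%d] <= %f) {\n' % (pad, t.feature[node], t.threshold[node])
    code += export_if_else(clf, t.children_left[node], depth + 1)
    code += '%s} else {\n' % pad
    code += export_if_else(clf, t.children_right[node], depth + 1)
    code += '%s}\n' % pad
    return code

# e.g. print('int predict(float *x) {\n%s}' % export_if_else(clf))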

Here's what it looks like for a Decision Tree that classifies the Iris dataset (you can find the full output in the Code listings section at the end of this post).

As you can see, we're using 0 bytes of RAM to get the classification result, since no variable is allocated. On the other hand, the program space will grow almost linearly with the number of splits.

Since program space is often much more plentiful than RAM on microcontrollers, this implementation exploits its abundance to deploy larger models. How large? It will depend on the flash size available: many new-generation boards (Arduino Nano 33 BLE Sense, ESP32, ST Nucleo...) have 1 MB of flash, which will hold tens of thousands of splits.

Random Forest

Random Forest is just many Decision Trees joined together in a voting scheme. The core idea is that of "the wisdom of the crowd": if many trees vote for a given class (having been trained on different subsets of the training set), that class is probably the true class.

Towards Data Science has a more detailed guide on Random Forest and how it balances the trees with the bagging technique.

Being just a collection of Decision Trees, Random Forest gets the exact same implementation with 0 bytes of RAM required (strictly speaking, it needs as many bytes as the number of classes to store the votes, but that's really negligible): it just hard-codes all of its composing trees.

XGBoost (Extreme Gradient Boosting)

Extreme Gradient Boosting is "Gradient Boosting on steroids" and has gained much attention from the Machine learning community due to its top results in many data competitions.

  1. "gradient boosting" refers to the process of chaining a number of trees so that each tree tries to learn from the errors of the previous
  2. "extreme" refers to many software and hardware optimizations that greatly reduce the time it takes to train the model

You can read the original paper about XGBoost here. For a discursive description head to KDNuggets; if you want some more math, refer to this blog post on Medium.

Porting to plain C

If you followed my earlier posts on Gaussian Naive Bayes, SEFR, Relevant Vector Machine and Support Vector Machines, you already know how to port these new classifiers.

If you're new, you will need a couple of things:

  1. install the micromlgen package with
pip install micromlgen
  2. (optionally, if you want to use Extreme Gradient Boosting) install the xgboost package with
pip install xgboost
  3. use the micromlgen.port function to generate your plain C code
from micromlgen import port
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_iris

clf = DecisionTreeClassifier()
X, y = load_iris(return_X_y=True)
clf.fit(X, y)
print(port(clf))
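
Porting the other two classifiers works the same way, since port auto-detects the classifier type. Here's a sketch (the hyperparameters are just illustrative):

from micromlgen import port
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

X, y = load_iris(return_X_y=True)

# Random Forest
forest = RandomForestClassifier(n_estimators=10).fit(X, y)
print(port(forest))

# XGBoost: requires the xgboost package from step 2
booster = XGBClassifier(n_estimators=10).fit(X, y)
print(port(booster))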

You can then copy-paste the C code and import it in your sketch.

Using it in the Arduino sketch

Once you have the classifier code, create a new project named TreeClassifierExample and copy the classifier code into a file named DecisionTree.h (or RandomForest.h or XGBoost.h depending on the model you chose).

Then copy the following to the main .ino file.

#include "DecisionTree.h"

Eloquent::ML::Port::DecisionTree clf;

void setup() {
    Serial.begin(115200);
    Serial.println("Begin");
}

void loop() {
    float irisSample[4] = {6.2, 2.8, 4.8, 1.8};

    Serial.print("Predicted label (you should see '2': ");
    Serial.println(clf.predict(irisSample));
    delay(1000);
}

Benchmarks

How do the 3 classifiers compare against each other?

We will evaluate a few key points:

  • training time
  • accuracy
  • needed RAM
  • needed Flash

for each classifier on a variety of datasets. I will report the RAM and Flash results as measured on the old-generation Arduino Nano, so you should focus on the relative figures rather than the absolute ones.

| Dataset | Classifier | Training time (s) | Accuracy | RAM (bytes) | Flash (bytes) |
|---------|------------|-------------------|----------|-------------|---------------|
| Gas Sensor Array Drift (13910 samples x 128 features, 6 classes) | Decision Tree | 1.6 | 0.781 ± 0.12 | 290 | 5722 |
| | Random Forest | 3 | 0.865 ± 0.083 | 290 | 6438 |
| | XGBoost | 18.8 | 0.878 ± 0.074 | 290 | 6506 |
| Gesture Phase Segmentation (10000 samples x 19 features, 5 classes) | Decision Tree | 0.1 | 0.943 ± 0.005 | 290 | 5638 |
| | Random Forest | 0.7 | 0.970 ± 0.004 | 306 | 6466 |
| | XGBoost | 18.9 | 0.969 ± 0.003 | 306 | 6536 |
| Drive Diagnosis (10000 samples x 48 features, 11 classes) | Decision Tree | 0.6 | 0.946 ± 0.005 | 306 | 5850 |
| | Random Forest | 2.6 | 0.983 ± 0.003 | 306 | 6526 |
| | XGBoost | 68.9 | 0.977 ± 0.005 | 306 | 6698 |

* all datasets are taken from the UCI Machine Learning datasets archive

I'm collecting more data for a complete benchmark, but in the meantime you can see that Random Forest and XGBoost are on par in accuracy; the difference is that XGBoost takes 5 to 25 times longer to train.

I've never used XGBoost before, so I may be missing some tuning parameters, but for now Random Forest remains my favourite classifier.

Code listings

// example IRIS dataset classification with Decision Tree
int predict(float *x) {
  if (x[3] <= 0.800000011920929) {
      return 0;
  }
  else {
      if (x[3] <= 1.75) {
          if (x[2] <= 4.950000047683716) {
              if (x[0] <= 5.049999952316284) {
                  return 1;
              }
              else {
                  return 1;
              }
          }
          else {
              return 2;
          }
      }
      else {
          if (x[2] <= 4.950000047683716) {
              return 2;
          }
          else {
              return 2;
          }
      }
  }
}

// example IRIS dataset classification with Random Forest of 3 trees

int predict(float *x) {
  uint16_t votes[3] = { 0 };

  // tree #1
  if (x[0] <= 5.450000047683716) {
      if (x[1] <= 2.950000047683716) {
          votes[1] += 1;
      }
      else {
          votes[0] += 1;
      }
  }
  else {
      if (x[0] <= 6.049999952316284) {
          if (x[3] <= 1.699999988079071) {
              if (x[2] <= 3.549999952316284) {
                  votes[0] += 1;
              }
              else {
                  votes[1] += 1;
              }
          }
          else {
              votes[2] += 1;
          }
      }
      else {
          if (x[3] <= 1.699999988079071) {
              if (x[3] <= 1.449999988079071) {
                  if (x[0] <= 6.1499998569488525) {
                      votes[1] += 1;
                  }
                  else {
                      votes[1] += 1;
                  }
              }
              else {
                  votes[1] += 1;
              }
          }
          else {
              votes[2] += 1;
          }
      }
  }

  // tree #2
  if (x[0] <= 5.549999952316284) {
      if (x[2] <= 2.449999988079071) {
          votes[0] += 1;
      }
      else {
          if (x[2] <= 3.950000047683716) {
              votes[1] += 1;
          }
          else {
              votes[1] += 1;
          }
      }
  }
  else {
      if (x[3] <= 1.699999988079071) {
          if (x[1] <= 2.649999976158142) {
              if (x[3] <= 1.25) {
                  votes[1] += 1;
              }
              else {
                  votes[1] += 1;
              }
          }
          else {
              if (x[2] <= 4.1499998569488525) {
                  votes[1] += 1;
              }
              else {
                  if (x[0] <= 6.75) {
                      votes[1] += 1;
                  }
                  else {
                      votes[1] += 1;
                  }
              }
          }
      }
      else {
          if (x[0] <= 6.0) {
              votes[2] += 1;
          }
          else {
              votes[2] += 1;
          }
      }
  }

  // tree #3
  if (x[3] <= 1.75) {
      if (x[2] <= 2.449999988079071) {
          votes[0] += 1;
      }
      else {
          if (x[2] <= 4.8500001430511475) {
              if (x[0] <= 5.299999952316284) {
                  votes[1] += 1;
              }
              else {
                  votes[1] += 1;
              }
          }
          else {
              votes[1] += 1;
          }
      }
  }
  else {
      if (x[0] <= 5.950000047683716) {
          votes[2] += 1;
      }
      else {
          votes[2] += 1;
      }
  }

  // return argmax of votes
  uint8_t classIdx = 0;
  uint16_t maxVotes = votes[0];

  for (uint8_t i = 1; i < 3; i++) {
      if (votes[i] > maxVotes) {
          classIdx = i;
          maxVotes = votes[i];
      }
  }

  return classIdx;
}

Better word classification with Arduino Nano 33 BLE Sense and Machine Learning

Let's revamp the post I wrote about word classification using Machine Learning on Arduino, this time using a proper microphone (the MP34DT05 mounted on the Arduino Nano 33 BLE Sense) instead of a cheap analog one: will the results improve?

(Image from https://www.udemy.com/course/learn-audio-processing-complete-engineers-course/)

Updated on 16 October 2020: step-by-step explanation of the process with ready-made sketch code.

What you'll learn

This tutorial will teach you how to capture audio from the Arduino Nano 33 BLE Sense microphone and classify it: at the end of this post, you will have a trained model able to detect, in real time, the words you speak, among the ones you trained it to recognize. The classification will occur directly on your Arduino board.

This is not a general-purpose speech recognizer able to convert speech-to-text: it works only on the words you train it on.

What you'll need

To install the software, open your terminal and install the libraries.

pip install -U scikit-learn
pip install -U micromlgen

Step 1. Capture audio samples

First of all, we need to capture a bunch of examples of the words we want to recognize.

In the original post, we used an analog microphone to record the audio. It is for sure the easiest way to interact with audio on a microcontroller since you only need to analogRead() the selected pin to get a value from the sensor.

This simplicity, however, comes at the cost of nearly non-existent signal pre-processing from the sensor itself: most of the time, you will get junk - I don't want to be rude, but that's it.

Theory: Pulse-density modulation (a.k.a. PDM)

The microphone mounted on the Arduino Nano 33 BLE Sense (the MP34DT05) is fortunately much better than this: it gives you access to a modulated signal much more suitable for our processing needs.

The modulation used is pulse-density: I won't try to explain how this works, since I'm not an expert in DSP and it's not the main scope of this article (refer to Wikipedia for some more information).

What matters to us is that we can grab an array of bytes from the microphone and extract its Root Mean Square (a.k.a. RMS) to be used as a feature for our Machine Learning model.
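
For reference, here's the RMS formula in Python (just a sketch; on the board the equivalent computation happens in C):

import numpy as np

def rms(window):
    # root mean square of a window of audio samples
    window = np.asarray(window, dtype=float)
    return np.sqrt(np.mean(window ** 2))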

I had some difficulty finding examples on how to access the microphone on the Arduino Nano 33 BLE Sense board: fortunately, there's a Github repo from DelaGia that shows how to access all the sensors of the board.

I extracted the microphone part and encapsulated it in an easy-to-use class, so you don't really need to dig into the implementation details if you're not interested.

Practice: the code to capture the samples

When loaded on your Arduino Nano 33 BLE Sense, the following sketch will wait for you to speak in front of the microphone: once it detects a sound, it will record 64 audio values and print them to the serial monitor.

From my experience, 64 samples are sufficient to cover short words such as yes, no, play, stop: if you plan to classify longer words, you may need to increase this number.

I suggest you keep the words short: longer words will probably decrease the accuracy of the model. If you nonetheless want a longer duration, at least keep the number of words as low as possible.

Download the Arduino Nano 33 BLE Sense - Capture audio samples sketch, open it in the Arduino IDE and flash it to your board.

Here's the main code.

#include "Mic.h"

// tune as per your needs
#define SAMPLES 64
#define GAIN (1.0f/50)
#define SOUND_THRESHOLD 2000

float features[SAMPLES];
Mic mic;

void setup() {
    Serial.begin(115200);
    PDM.onReceive(onAudio);
    mic.begin();
    delay(3000);
}

void loop() {
    // wait for a word to be pronounced
    if (recordAudioSample()) {
        // print features to serial monitor
        for (int i = 0; i < SAMPLES; i++) {
            Serial.print(features[i], 6);
            Serial.print(i == SAMPLES - 1 ? '\n' : ',');
        }

        delay(1000);
    }

    delay(20);
}

/**
 * PDM callback to update mic object
 */
void onAudio() {
    mic.update();
}

/**
 * Read given number of samples from mic
 */
bool recordAudioSample() {
    if (mic.hasData() && mic.data() > SOUND_THRESHOLD) {

        for (int i = 0; i < SAMPLES; i++) {
            while (!mic.hasData())
                delay(1);

            features[i] = mic.pop() * GAIN;
        }

        return true;
    }

    return false;
}

Now that we have the acquisition logic in place, it's time for you to record some samples of the words you want to classify.

Action: capture the word examples

Now you have to capture as many samples of the words you want to classify as possible.

Open the serial monitor and pronounce a word near the microphone: a line of numbers will be printed on the monitor.

This line of numbers is the feature representation of your word.

You need many lines like this for an accurate prediction, so keep repeating the same word 15-30 times.

**My advice**: while recording the samples, vary both the distance of your mouth from the mic and the intensity of your voice: this will produce a more robust classification model later on.

After you've repeated the same word many times, copy the content of the serial monitor and save it in a CSV file named after the word, for example yes.csv.

Then clear the serial monitor and repeat the process for each word.

Keep all these files in a folder because we need them to train our classifier.

Step 2. Train the machine learning model

Now that we have the samples, it's time to train the classifier.

Create a Python project in your favourite IDE or use your favourite text editor, if you don't have one.

As described in my post about how to train a classifier, we create a Python script that reads all the files inside a folder and concatenates them into a single array that you feed to the classifier model.

Be sure your folder structure is like the following:

ArduinoWordClassification
  |-- train_classifier.py
  |-- data/
  |---- yes.csv
  |---- no.csv
  |---- play.csv
  |---- any other .csv file you recorded

# file: train_classifier.py

import numpy as np
from os.path import basename
from glob import glob
from sklearn.svm import SVC
from micromlgen import port
from sklearn.model_selection import train_test_split

def load_features(folder):
    dataset = None
    classmap = {}
    for class_idx, filename in enumerate(glob('%s/*.csv' % folder)):
        class_name = basename(filename)[:-4]
        classmap[class_idx] = class_name
        samples = np.loadtxt(filename, dtype=float, delimiter=',')
        labels = np.ones((len(samples), 1)) * class_idx
        samples = np.hstack((samples, labels))
        dataset = samples if dataset is None else np.vstack((dataset, samples))
    return dataset, classmap

np.random.seed(0)
dataset, classmap = load_features('data')
X, y = dataset[:, :-1], dataset[:, -1]
# this train/test split is for testing your accuracy only: once you're satisfied with the results, train on the whole dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf = SVC(kernel='poly', degree=2, gamma=0.1, C=100)
clf.fit(X_train, y_train)

print('Accuracy', clf.score(X_test, y_test))
print('Exported classifier to plain C')
print(port(clf, classmap=classmap))

Among the classifiers I tried, SVM produced the best accuracy at 96% with 32 support vectors: it's not a super-tiny model, but it's quite small nevertheless.

If you're not satisfied with SVM, you can use Decision Tree, Random Forest, Gaussian Naive Bayes or Relevant Vector Machines. See my other posts for a detailed description of each.
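
Since port auto-detects the classifier type, swapping models is a small change to the script above. Here's a sketch with Random Forest (the hyperparameters are just illustrative; I'm assuming the classmap argument works the same here):

from sklearn.ensemble import RandomForestClassifier

# drop-in replacement for the SVC in train_classifier.py
clf = RandomForestClassifier(n_estimators=10)
clf.fit(X_train, y_train)
print('Accuracy', clf.score(X_test, y_test))
print(port(clf, classmap=classmap))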

In your console, after the accuracy score, you will have the plain C implementation of the classifier you trained. The following reports my SVM model.

// File: Classifier.h

#pragma once
namespace Eloquent {
    namespace ML {
        namespace Port {
            class SVM {
            public:
                /**
                * Predict class for features vector
                */
                int predict(float *x) {
                    float kernels[35] = { 0 };
                    float decisions[6] = { 0 };
                    int votes[4] = { 0 };
                    kernels[0] = compute_kernel(x,   33.0  , 41.0  , 47.0  , 54.0  , 59.0  , 61.0  , 56.0  , 51.0  , 50.0  , 51.0  , 44.0  , 32.0  , 23.0  , 15.0  , 12.0  , 8.0  , 5.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 5.0  , 3.0  , 5.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0 );
                    kernels[1] = compute_kernel(x,   40.0  , 50.0  , 51.0  , 60.0  , 56.0  , 57.0  , 58.0  , 53.0  , 50.0  , 45.0  , 42.0  , 34.0  , 23.0  , 16.0  , 10.0  , 7.0  , 3.0  , 3.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 14.0  , 3.0  , 8.0  , 0.0  , 0.0  , 3.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 3.0  , 0.0  , 0.0  , 5.0  , 3.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 3.0  , 0.0  , 5.0  , 3.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 0.0  , 3.0  , 0.0  , 0.0  , 0.0  , 3.0 );
                    kernels[2] = compute_kernel(x,   56.0  , 68.0  , 78.0  , 91.0  , 84.0  , 84.0  , 84.0  , 74.0  , 69.0  , 64.0  , 57.0  , 44.0  , 33.0  , 18.0  , 12.0  , 8.0  , 5.0  , 9.0  , 15.0  , 12.0  , 12.0  , 9.0  , 12.0  , 7.0  , 3.0  , 10.0  , 12.0  , 6.0  , 3.0  , 0.0  , 0.0  , 0.0  , 0.0  , 6.0  , 3.0  , 6.0  , 10.0  , 10.0  , 8.0  , 3.0  , 9.0  , 9.0  , 9.0  , 8.0  , 9.0  , 9.0  , 11.0  , 3.0  , 8.0  , 9.0  , 8.0  , 8.0  , 8.0  , 6.0  , 7.0  , 3.0  , 3.0  , 8.0  , 5.0  , 3.0  , 0.0  , 3.0  , 0.0  , 0.0 );

                    // ...many other kernel computations...

                    decisions[0] = 0.722587775297
                                   + kernels[1] * 3.35855e-07
                                   + kernels[2] * 1.64612e-07
                                   + kernels[4] * 6.00056e-07
                                   + kernels[5] * 3.5195e-08
                                   + kernels[7] * -4.2079e-08
                                   + kernels[8] * -4.2843e-08
                                   + kernels[9] * -9.994e-09
                                   + kernels[10] * -5.11065e-07
                                   + kernels[11] * -5.979e-09
                                   + kernels[12] * -4.4672e-08
                                   + kernels[13] * -1.5606e-08
                                   + kernels[14] * -1.2941e-08
                                   + kernels[15] * -2.18903e-07
                                   + kernels[17] * -2.31635e-07
                            ;
                    decisions[1] = -1.658344586719
                                   + kernels[0] * 2.45018e-07
                                   + kernels[1] * 4.30223e-07
                                   + kernels[3] * 1.00277e-07
                                   + kernels[4] * 2.16524e-07
                                   + kernels[18] * -4.81187e-07
                                   + kernels[20] * -5.10856e-07
                            ;
                    decisions[2] = -1.968607562265
                                   + kernels[0] * 3.001833e-06
                                   + kernels[3] * 4.5201e-08
                                   + kernels[4] * 1.54493e-06
                                   + kernels[5] * 2.81834e-07
                                   + kernels[25] * -5.93581e-07
                                   + kernels[26] * -2.89779e-07
                                   + kernels[27] * -1.73958e-06
                                   + kernels[28] * -1.09552e-07
                                   + kernels[30] * -3.09126e-07
                                   + kernels[31] * -1.294219e-06
                                   + kernels[32] * -5.37961e-07
                            ;
                    decisions[3] = -0.720663029823
                                   + kernels[6] * 1.4362e-08
                                   + kernels[7] * 6.177e-09
                                   + kernels[9] * 1.25e-08
                                   + kernels[10] * 2.05478e-07
                                   + kernels[12] * 2.501e-08
                                   + kernels[15] * 4.363e-07
                                   + kernels[16] * 9.147e-09
                                   + kernels[18] * -1.82182e-07
                                   + kernels[20] * -4.93707e-07
                                   + kernels[21] * -3.3084e-08
                            ;
                    decisions[4] = -1.605747746589
                                   + kernels[6] * 6.182e-09
                                   + kernels[7] * 1.3853e-08
                                   + kernels[8] * 2.12e-10
                                   + kernels[9] * 1.1243e-08
                                   + kernels[10] * 7.80681e-07
                                   + kernels[15] * 8.347e-07
                                   + kernels[17] * 1.64985e-07
                                   + kernels[23] * -4.25014e-07
                                   + kernels[25] * -1.134803e-06
                                   + kernels[34] * -2.52038e-07
                            ;
                    decisions[5] = -0.934328303475
                                   + kernels[19] * 3.3529e-07
                                   + kernels[20] * 1.121946e-06
                                   + kernels[21] * 3.44683e-07
                                   + kernels[22] * -6.23056e-07
                                   + kernels[24] * -1.4612e-07
                                   + kernels[28] * -1.24025e-07
                                   + kernels[29] * -4.31701e-07
                                   + kernels[31] * -9.2146e-08
                                   + kernels[33] * -3.8487e-07
                            ;
                    votes[decisions[0] > 0 ? 0 : 1] += 1;
                    votes[decisions[1] > 0 ? 0 : 2] += 1;
                    votes[decisions[2] > 0 ? 0 : 3] += 1;
                    votes[decisions[3] > 0 ? 1 : 2] += 1;
                    votes[decisions[4] > 0 ? 1 : 3] += 1;
                    votes[decisions[5] > 0 ? 2 : 3] += 1;
                    int val = votes[0];
                    int idx = 0;

                    for (int i = 1; i < 4; i++) {
                        if (votes[i] > val) {
                            val = votes[i];
                            idx = i;
                        }
                    }

                    return idx;
                }

                /**
                * Convert class idx to readable name
                */
                const char* predictLabel(float *x) {
                    switch (predict(x)) {
                        case 0:
                            return "no";
                        case 1:
                            return "stop";
                        case 2:
                            return "play";
                        case 3:
                            return "yes";
                        default:
                            return "Houston we have a problem";
                    }
                }

            protected:
                /**
                * Compute kernel between feature vector and support vector.
                * Kernel type: poly
                */
                float compute_kernel(float *x, ...) {
                    va_list w;
                    va_start(w, x);
                    float kernel = 0.0;

                    for (uint16_t i = 0; i < 64; i++) {
                        kernel += x[i] * va_arg(w, double);
                    }

                    return pow((0.1 * kernel) + 0.0, 2);
                }
            };
        }
    }
}

Step 3. Deploy to your microcontroller

Now we have all the pieces we need to perform word classification on our Arduino board.

Download the Arduino Nano 33 BLE Sense - Audio classification sketch, open it in the Arduino IDE and paste the plain C code you got in the console inside the Classifier.h file (delete all of its existing contents first!).

Fine: it's time to deploy!

Hit the upload button: if everything went fine, open the serial monitor and pronounce one of the words you recorded during Step 1.

Hopefully, you will read the word on the serial monitor.

Here's a quick demo (please forgive me for the bad video quality).


If you liked this tutorial and it helped you successfully implement word classification on your Arduino Nano 33 BLE Sense, please share it on your social media so others can benefit too.

If you have troubles or questions, don't hesitate to leave a comment: I will be happy to help you.

EloquentML grows its family of classifiers: Gaussian Naive Bayes on Arduino

Are you looking for a top-performing classifier with a minimal number of parameters to tune? Look no further: Gaussian Naive Bayes is what you're looking for. And thanks to EloquentML you can now port it to your microcontroller.

(Gaussian) Naive Bayes

Naive Bayes classifiers are simple models based on probability theory that can be used for classification.

They originate from the assumption of independence among the input variables. Even though this assumption doesn't hold true in the vast majority of cases, they often perform very well at many classification tasks, so they're quite popular.

Gaussian Naive Bayes stacks another (mostly wrong) assumption on top: that the variables exhibit a Gaussian probability distribution.

I (and many others like me) will never understand how it is possible that so many wrong assumptions lead to such good performance!

Nevertheless, what is important to us is that sklearn implements GaussianNB, so we can easily train such a classifier.
The most interesting part is that GaussianNB can be tuned with just a single parameter: var_smoothing.

Don't ask me what it does in theory: in practice, you change it and your accuracy may improve. This leads to an easy tuning process that doesn't involve an expensive grid search.

import sklearn.datasets as d
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import normalize
from sklearn.naive_bayes import GaussianNB

def pick_best(X_train, X_test, y_train, y_test):
    best = (None, 0)
    for var_smoothing in range(-7, 1):
        clf = GaussianNB(var_smoothing=pow(10, var_smoothing))
        clf.fit(X_train, y_train)
        y_pred = clf.predict(X_test)
        accuracy = (y_pred == y_test).sum()
        if accuracy > best[1]:
            best = (clf, accuracy)
    print('best accuracy', best[1] / len(y_test))
    return best[0]

iris = d.load_iris()
X = normalize(iris.data)
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
clf = pick_best(X_train, X_test, y_train, y_test)

This simple procedure will train a bunch of classifiers with a different var_smoothing factor and pick the best performing one.

EloquentML integration

Once you have your trained classifier, porting it to C is as easy as always:

from micromlgen import port

clf = pick_best(X_train, X_test, y_train, y_test)
print(port(clf))

Always remember to run

pip install --upgrade micromlgen

port is a magic method able to port many classifiers: it will automatically detect the proper converter for you.

What does the exported code look like?

#pragma once
namespace Eloquent {
    namespace ML {
        namespace Port {
            class GaussianNB {
                public:
                    /**
                    * Predict class for features vector
                    */
                    int predict(float *x) {
                        float votes[3] = { 0.0f };
                        float theta[4] = { 0 };
                        float sigma[4] = { 0 };
                        theta[0] = 0.801139789889; theta[1] = 0.54726920354; theta[2] = 0.234408773313; theta[3] = 0.039178084094;
                        sigma[0] = 0.000366881742; sigma[1] = 0.000907992556; sigma[2] = 0.000740960787; sigma[3] = 0.000274925514;
                        votes[0] = 0.333333333333 - gauss(x, theta, sigma);
                        theta[0] = 0.748563871324; theta[1] = 0.349390892644; theta[2] = 0.536186138345; theta[3] = 0.166747384117;
                        sigma[0] = 0.000529727082; sigma[1] = 0.000847956504; sigma[2] = 0.000690057342; sigma[3] = 0.000311828658;
                        votes[1] = 0.333333333333 - gauss(x, theta, sigma);
                        theta[0] = 0.704497203305; theta[1] = 0.318862439835; theta[2] = 0.593755956917; theta[3] = 0.217288784452;
                        sigma[0] = 0.000363782089; sigma[1] = 0.000813846722; sigma[2] = 0.000415475678; sigma[3] = 0.000758478249;
                        votes[2] = 0.333333333333 - gauss(x, theta, sigma);
                        // return argmax of votes
                        uint8_t classIdx = 0;
                        float maxVotes = votes[0];

                        for (uint8_t i = 1; i < 3; i++) {
                            if (votes[i] > maxVotes) {
                                classIdx = i;
                                maxVotes = votes[i];
                            }
                        }

                        return classIdx;
                    }

                protected:
                    /**
                    * Compute gaussian value
                    */
                    float gauss(float *x, float *theta, float *sigma) {
                        float gauss = 0.0f;

                        for (uint16_t i = 0; i < 4; i++) {
                            gauss += log(sigma[i]);
                            gauss += pow(x[i] - theta[i], 2) / sigma[i];
                        }

                        return gauss;
                    }
                };
            }
        }
    }

As you can see, we need a couple of "weight vectors":

  • theta is the mean of each feature
  • sigma is the standard deviation

The computation is quite light: just a couple of operations per feature; the class with the highest score is then selected.
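
In Python terms, each class receives a score like the following (a sketch mirroring the ported C code above):

import numpy as np

def class_score(x, theta, sigma, prior):
    # mirrors the C code: prior minus the penalty terms log(sigma) + (x - theta)^2 / sigma
    x, theta, sigma = np.asarray(x), np.asarray(theta), np.asarray(sigma)
    return prior - np.sum(np.log(sigma) + (x - theta) ** 2 / sigma)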

Benchmarks

Following there's a recap of a couple of benchmarks I ran on an Arduino Nano 33 BLE Sense.

| Classifier | Dataset | Flash | RAM | Execution time | Accuracy |
|------------|---------|-------|-----|----------------|----------|
| GaussianNB | Iris (150x4) | 82 KB | 42 KB | 65 ms | 97% |
| LinearSVC | Iris (150x4) | 83 KB | 42 KB | 76 ms | 99% |
| GaussianNB | Breast cancer (80x40) | 90 KB | 42 KB | 160 ms | 77% |
| LinearSVC | Breast cancer (80x40) | 112 KB | 42 KB | 378 ms | 73% |
| GaussianNB | Wine (100x13) | 85 KB | 42 KB | 130 ms | 97% |
| LinearSVC | Wine (100x13) | 89 KB | 42 KB | 125 ms | 99% |

We can see that the accuracy is on par with a linear SVM, reaching up to 97% on some datasets. Its simplicity shines with high-dimensional datasets (breast cancer), where the execution time is half that of LinearSVC: I can see this pattern repeating with other real-world, medium-sized datasets.


That's it: you can find the example project on Github.

Incremental multiclass classification on microcontrollers: One vs One

In earlier posts I showed how you can run incremental binary classification on your microcontroller with the Stochastic Gradient Descent or Passive-Aggressive classifiers. Now it is time to upgrade your toolbelt with a new item: the One-vs-One multiclass classifier.

One vs One

Many classifiers are, by nature, binary: they can only distinguish the positive class from the negative one. Many real-world problems, however, are multiclass: you have 3 or more possible outcomes to distinguish.

There are a couple of ways to achieve this:

  1. One vs All: if your classifier is able to output a confidence score for its prediction, for N classes you train N classifiers, each able to recognize a single class. During inference, you pick the "most confident" one.
  2. One vs One: for N classes, you train N * (N-1) / 2 classifiers, one for each pair of classes. During inference, each classifier makes a prediction and you pick the class with the highest number of votes (see the sketch after this list).
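
Here's a minimal Python sketch of the voting scheme in point 2 (pairwise_clfs is a hypothetical dict mapping each pair of classes to a trained binary classifier that returns one of the two):

from itertools import combinations

def ovo_predict(x, pairwise_clfs, n_classes):
    # each pairwise classifier votes for one of its two classes
    votes = [0] * n_classes
    for pair in combinations(range(n_classes), 2):
        votes[pairwise_clfs[pair](x)] += 1
    # the class with the most votes wins
    return max(range(n_classes), key=lambda c: votes[c])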

Since SGD and Passive-Aggressive don't output a confidence score, I implemented the One vs One algorithm to tackle the multiclass classification problem on microcontrollers.

Actually, One vs One is not a new type of classifier: it is really a "coordinator" class that sorts which samples go to which classifier. You can still choose your own classifier type to use.

Like SGD and Passive-Aggressive, OneVsOne implements the classifier interface, so you will use the well-known fitOne and predict methods.

Example code

// Esp32 has some problems with min/max
#define min(a, b) ((a) < (b) ? (a) : (b))
#define max(a, b) ((a) > (b) ? (a) : (b))
// you will actually need only one of SGD or PassiveAggressive
#include "EloquentSGD.h"
#include "EloquentPassiveAggressive.h"
#include "EloquentOneVsOne.h"
#include "EloquentAccuracyScorer.h"
// this file defines FEATURES_DIM, NUM_CLASSES, TRAIN_SAMPLES and TEST_SAMPLES
#include "dataset.h"

using namespace Eloquent::ML;

void setup() {
  Serial.begin(115200);
  delay(3000);
}

void loop() {
  AccuracyScorer scorer;
  // OneVsOne needs the actual classifier class, the number of features and the number of classes
  OneVsOne<SGD<FEATURES_DIM>, FEATURES_DIM, NUM_CLASSES> clf;

  // clf.set() propagates the configuration to the actual classifiers
  // if a parameter does not exists on the classifier, it does nothing
  // in this example, alpha and momentum refer to SGD, C to Passive-Aggressive
  clf.set("alpha", 1);
  clf.set("momentum", 0.7);
  clf.set("C", 0.1);

  // fit
  // I noticed that repeating the training a few times over the same dataset
  // increases performance to a certain extent: if you re-train it too much, performance will decay
  for (unsigned int i = 0; i < TRAIN_SAMPLES * 5; i++) {
      clf.fitOne(X_train[i % TRAIN_SAMPLES], y_train[i % TRAIN_SAMPLES]);
  }

  // predict
  for (int i = 0; i < TEST_SAMPLES; i++) {
      int y_true = y_test[i];
      int y_pred = clf.predict(X_test[i]);

      Serial.print("Predicted ");
      Serial.print(y_pred);
      Serial.print(" vs ");
      Serial.println(y_true);
      scorer.scoreOne(y_true, y_pred);
  }

  Serial.print("Accuracy = ");
  Serial.print(scorer.accuracy() * 100);
  Serial.print(" out of ");
  Serial.print(scorer.support());
  Serial.println(" samples");
  delay(30000);
}

If you refer to the previous posts on SGD and Passive-Aggressive, you'll notice that you can replace one with the other by changing a single line of code. This lets you experiment to find the best configuration for your project without hassle.

Accuracy

Well, accuracy varies.

In my tests, I couldn't get predictable accuracy on all datasets. I couldn't even get acceptable accuracy on the Iris dataset (60% max). But I got 90% accuracy on the Digits dataset from scikit-learn with 6 classes.
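
For reference, here's one way you could subset the Digits dataset to 6 classes before converting it to the dataset.h file (just a sketch; the conversion step is not shown):

from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
# keep only the digits 0-5, for a 6-class problem
mask = y < 6
X, y = X[mask], y[mask]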

You have to experiment. Try Passive-Aggressive with several C values. If that doesn't work, try SGD with varying momentum and alpha. Try repeating the training over the dataset 5 or 10 times.

In a future post I'll report my benchmarks so you can see what works for you and what doesn't.
This is an emerging field for me, so I will need time to master it.


As always, you can find the example on Github with a dataset to experiment with.

Easy Tensorflow TinyML on ESP32 and Arduino

In this post I will show you how to easily deploy your Tensorflow Lite model to an ESP32 using the Arduino IDE without any compilation stuff.

So I finally settled on giving a try to TinyML, which is a way to deploy Tensorflow Lite models to microcontrollers.
As a first step, I downloaded the free chapters from the TinyML book website and rapidly skimmed through them.

Let me say that, even if it starts from a "too beginner" level for me (they explain why you need to use the arrow instead of the dot to access a pointer's property), it is a very well written book. They cover every single aspect you may encounter during your first steps and give a very sound introduction to the general topic of training, validating and testing a model on a dataset.

If I go on with this TinyML stuff, I'll probably buy a copy: I strongly recommend you at least read the free sample.

Once done reading the 6 chapters, I wanted to try the described tutorial on my ESP32. Sadly, the ESP32 is not mentioned among the supported boards in the book, so I had to figure it out by myself.

In this post I'm going to recap my learnings about the steps you need to follow to deploy TF models to a microcontroller, and introduce you to a tiny library I wrote to facilitate the deployment in the Arduino IDE: EloquentTinyML.

Building our first model

First of all, we need a model to deploy.

The book guides us on building a neural network capable of predicting the sine value of a given number, in the range from 0 to 2π (6.28).

It's an easy model to get started with (the "Hello world" of machine learning, according to the authors), so we'll stick with it.

I won't go into too much detail about generating data and training the classifier, because I suppose you already know that part if you want to port Tensorflow to a microcontroller.

Here's the code from the book.

import math
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def get_model():
    SAMPLES = 1000
    np.random.seed(1337)
    x_values = np.random.uniform(low=0, high=2*math.pi, size=SAMPLES)
    # shuffle and add noise
    np.random.shuffle(x_values)
    y_values = np.sin(x_values)
    y_values += 0.1 * np.random.randn(*y_values.shape)

    # split into train, validation, test
    TRAIN_SPLIT =  int(0.6 * SAMPLES)
    TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)
    x_train, x_test, x_validate = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])
    y_train, y_test, y_validate = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])

    # create a NN with 2 layers of 16 neurons
    model = tf.keras.Sequential()
    model.add(layers.Dense(16, activation='relu', input_shape=(1,)))
    model.add(layers.Dense(16, activation='relu'))
    model.add(layers.Dense(1))
    model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
    model.fit(x_train, y_train, epochs=200, batch_size=16,
                        validation_data=(x_validate, y_validate))
    return model

Exporting the model

Now that we have a model, we need to convert it into a form ready to be deployed on our microcontroller. This is actually just an array of bytes that the TF interpreter will read to recreate the model.

model = get_model()
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_model = converter.convert()

# Save the model to disk
open("sine_model_quantized.tflite", "wb").write(tflite_model)

Then you have to convert it to a C array on the command line.

xxd -i sine_model_quantized.tflite > sine_model_quantized.cc

This is copy-paste code that would hardly ever change, so, to ease my development cycle, I wrapped this little snippet in a tiny package you can use: it's called tinymlgen.

pip install tinymlgen

from tinymlgen import port

model = get_model()
c_code = port(model, pretty_print=True)
print(c_code)

I point you to the Github repo for a couple more options you can configure.

Using this package, you don't have to open a terminal and use the xxd program to get a usable result.

Use the model

Now it is finally time to deploy the model to our microcontroller.

This part can be tricky, actually, if you don't have one of the supported boards in the book (Arduino Nano 33, SparkFun Edge or STM32F746G Discovery kit).

I tried just setting "ESP32" as my target in the Arduino IDE and I got tons of errors.

Luckily for us, a man called Wezley Sherman wrote a tutorial on how to get a TinyML project to compile using the PlatformIO environment. He saved me the effort to try to fix all the broken import errors on my own.

Once I could get the project to compile using PlatformIO (which I don't use in my everyday tinkering), I set out to make it compile in the Arduino IDE too.

Fortunately, it was not difficult at all, so I can finally bring you this library that does all the heavy lifting for you.

Thanks to the library, you won't need to download the full Tensorflow Lite framework and compile it on your own machine: it has already been done for you.

As an added bonus, I created a wrapper class that encapsulates all the boring repetitive stuff, so you can focus solely on the application logic.

Install the library from the library manager in the Arduino IDE (search for "EloquentTinyML") or clone it from Github:

git clone https://github.com/eloquentarduino/EloquentTinyML.git


Here is an example of how to use it.

#include "EloquentTinyML.h"
// sine_model.h contains the array you exported from the previous step
// with either xxd or tinymlgen
#include "sine_model.h"

#define NUMBER_OF_INPUTS 1
#define NUMBER_OF_OUTPUTS 1
// in future projects you may need to tweak this value
// it's a trial and error process
#define TENSOR_ARENA_SIZE 2*1024

Eloquent::TinyML::TfLite<NUMBER_OF_INPUTS, NUMBER_OF_OUTPUTS, TENSOR_ARENA_SIZE> ml(sine_model);

void setup() {
    Serial.begin(115200);
}

void loop() {
    // pick up a random x and predict its sine
    float x = 3.14 * random(100) / 100;
    float y = sin(x);
    float input[1] = { x };
    float predicted = ml.predict(input);

    Serial.print("sin(");
    Serial.print(x);
    Serial.print(") = ");
    Serial.print(y);
    Serial.print("\t predicted: ");
    Serial.println(predicted);
    delay(1000);
}

Does it look easy to use? I bet so.

For simple cases like this example where you have a single output, the predict method returns that output so you can easily assign it to a variable.

If this is not the case and you expect multiple outputs from your model, you have to declare an output array.

float input[10] = { ... };
float output[5] = { 0 };

ml.predict(input, output);

You will find the complete code on Github, with the sine_model.h file too.

Wrapping up

I hope this post helped you kickstart your next TinyML project on your ESP32.

It has served me as a foundation for the next experiments I'm willing to do on this platform, which is really in its early stages and needs a lot of investigation into its capabilities.

I plan to do a comparison with my MicroML framework when I get more experience with both, so stay tuned for upcoming updates.

Disclaimer

I tested the library on both Ubuntu 18.04 and Windows 10 64 bit: if you are on a different platform and get compiling errors, please let me know in the comments so I can fix them.
