{ "version": "https://jsonfeed.org/version/1.1", "user_comment": "This feed allows you to read the posts from this site in any feed reader that supports the JSON Feed format. To add this feed to your reader, copy the following URL -- https://eloquentarduino.github.io/tag/incremental-learning/feed/json/ -- and add it your reader.", "home_page_url": "https://eloquentarduino.github.io/tag/incremental-learning/", "feed_url": "https://eloquentarduino.github.io/tag/incremental-learning/feed/json/", "language": "en-US", "title": "incremental-learning – Eloquent Arduino Blog", "description": "Machine learning on Arduino, programming & electronics", "items": [ { "id": "https://eloquentarduino.github.io/?p=1079", "url": "https://eloquentarduino.github.io/2020/04/incremental-multiclass-classification-on-microcontrollers-one-vs-one/", "title": "Incremental multiclass classification on microcontrollers: One vs One", "content_html": "

In earlier posts I showed how you can run incremental binary classification on your microcontroller with Stochastic Gradient Descent or the Passive-Aggressive classifier. Now it's time to upgrade your toolbelt with a new item: the One-vs-One multiclass classifier.

\n

\n

One vs One

\n

Many classifiers are binary by nature: they can only distinguish the positive class from the negative one. Many real-world problems, however, are multiclass: you have 3 or more possible outcomes to distinguish.

\n

There are a couple of ways to achieve this:

\n
    \n
  1. One vs All: if your classifier can output a confidence score for its prediction, for N classes you train N classifiers, each able to recognize a single class. During inference, you pick the "most confident" one.
  2. \n
  3. One vs One: for N classes, you train N * (N-1) / 2 classifiers, one for each pair of classes. During inference, each classifier makes a prediction and you pick the class with the highest number of votes.
  4. \n
\n

Since SGD and Passive-Aggressive don't output a confidence score, I implemented the One vs One algorithm to tackle the multiclass classification problem on microcontrollers.

\n

Actually, One vs One is not a new type of classifier: it is really a "coordinator" class that sorts which samples go to which classifier. You can still choose your own classifier type to use.

\n

Like SGD and Passive-Aggressive, OneVsOne implements the classifier interface, so you will use the familiar fitOne and predict methods.


Example code

\n
// Esp32 has some problems with min/max
#define min(a, b) ((a) < (b) ? (a) : (b))
#define max(a, b) ((a) > (b) ? (a) : (b))
// you will actually need only one of SGD or PassiveAggressive
#include "EloquentSGD.h"
#include "EloquentPassiveAggressive.h"
#include "EloquentOneVsOne.h"
#include "EloquentAccuracyScorer.h"
// this file defines FEATURES_DIM, NUM_CLASSES, TRAIN_SAMPLES and TEST_SAMPLES
#include "dataset.h"

using namespace Eloquent::ML;

void setup() {
  Serial.begin(115200);
  delay(3000);
}

void loop() {
  AccuracyScorer scorer;
  // OneVsOne needs the actual classifier class, the number of features and the number of classes
  OneVsOne<SGD<FEATURES_DIM>, FEATURES_DIM, NUM_CLASSES> clf;

  // clf.set() propagates the configuration to the actual classifiers;
  // if a parameter does not exist on the classifier, it does nothing.
  // In this example, alpha and momentum refer to SGD, C to Passive-Aggressive
  clf.set("alpha", 1);
  clf.set("momentum", 0.7);
  clf.set("C", 0.1);

  // fit
  // I noticed that repeating the training a few times over the same dataset
  // increases performance, but only to a certain extent: if you re-train it
  // too much, performance will decay
  for (unsigned int i = 0; i < TRAIN_SAMPLES * 5; i++) {
      clf.fitOne(X_train[i % TRAIN_SAMPLES], y_train[i % TRAIN_SAMPLES]);
  }

  // predict
  for (int i = 0; i < TEST_SAMPLES; i++) {
      int y_true = y_test[i];
      int y_pred = clf.predict(X_test[i]);

      Serial.print("Predicted ");
      Serial.print(y_pred);
      Serial.print(" vs ");
      Serial.println(y_true);
      scorer.scoreOne(y_true, y_pred);
  }

  Serial.print("Accuracy = ");
  Serial.print(scorer.accuracy() * 100);
  Serial.print(" out of ");
  Serial.print(scorer.support());
  Serial.println(" samples");
  delay(30000);
}
\n

If you refer to the previous posts on SGD and Passive-Aggressive, you'll notice that you can replace one with the other by changing a single line of code. This lets you experiment to find the best configuration for your project without hassle.

\n

Accuracy

\n

Well, accuracy varies.

\n

In my tests, I couldn't get predictable accuracy across all datasets. I couldn't even get acceptable accuracy on the Iris dataset (60% at most). I did get 90% accuracy on the Digits dataset from scikit-learn with 6 classes, though.

\n

You have to experiment. Try Passive-Aggressive with a range of C values. If that doesn't work, try SGD with varying momentum and alpha. Try repeating the training over the dataset 5 or 10 times.

\n

In a future post I'll report my benchmarks so you can see what works for you and what doesn't.
\nThis is an emerging field for me, so I will need time to master it.

\n
\n

As always, you can find the example on Github with the dataset to experiment with.

\n

The post Incremental multiclass classification on microcontrollers: One vs One appeared first on Eloquent Arduino Blog.

\n", "content_text": "In earlier posts I showed you can run incremental binary classification on your microcontroller with Stochastic Gradient Descent or Passive-Aggressive classifier. Now it is time to upgrade your toolbelt with a new item: One-vs-One multiclass classifier.\n\nOne vs One\nMany classifiers are, by nature, binary: they can only distinguish the positive class from the negative one. Many of real-world problems, however, are multiclass: you have 3 or more possible outcomes to distinguish from.\nThere are a couple of ways to achieve this:\n\nOne vs All: if your classifier is able to output a confidence score of its prediction, for N classes you train N classifiers, each able to recognize a single class. During inference, you pick the "most confident" one.\nOne vs One: for N classes, you train N * (N-1) / 2 classifiers, one for each couple of classes. During inference, each classifier makes a prediction and you pick the class with the highest number of votes.\n\nSince SGD and Passive-Aggressive don't output a confidence score, I implemented the One vs One algorithm to tackle the multiclass classification problem on microcontrollers.\nActually, One vs One is not a new type of classifier: it is really a "coordinator" class that sorts which samples go to which classifier. You can still choose your own classifier type to use.\nAs SGD and Passive-Aggressive, OneVsOne implements the classifier interface, so you will use the well known fitOne and predict methods.\n\r\n\r\n\r\n \r\n\tFinding this content useful?\r\n\r\n\t\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t \r\n \r\n \r\n \r\n\r\n\r\n\r\n\nExample code\n// Esp32 has some problems with min/max\n#define min(a, b) (a) < (b) ? (a) : (b)\n#define max(a, b) (a) > (b) ? 
(a) : (b)\n// you will actually need only one of SGD or PassiveAggressive\n#include "EloquentSGD.h"\n#include "EloquentPassiveAggressive.h"\n#include "EloquentOneVsOne.h"\n#include "EloquentAccuracyScorer.h"\n// this file defines NUM_FEATURES, NUM_CLASSES, TRAIN_SAMPLES and TEST_SAMPLES\n#include "dataset.h"\n\nusing namespace Eloquent::ML;\n\nvoid setup() {\n Serial.begin(115200);\n delay(3000);\n}\n\nvoid loop() {\n AccuracyScorer scorer;\n // OneVsOne needs the actual classifier class, the number of features and the number of classes\n OneVsOne<SGD<FEATURES_DIM>, FEATURES_DIM, NUM_CLASSES> clf;\n\n // clf.set() propagates the configuration to the actual classifiers\n // if a parameter does not exists on the classifier, it does nothing\n // in this example, alpha and momentum refer to SGD, C to Passive-Aggressive\n clf.set("alpha", 1);\n clf.set("momentum", 0.7);\n clf.set("C", 0.1);\n\n // fit\n // I noticed that repeating the training a few times over the same dataset increases performance to a certain extent: if you re-train it too much, performance will decay\n for (unsigned int i = 0; i < TRAIN_SAMPLES * 5; i++) {\n clf.fitOne(X_train[i % TRAIN_SAMPLES], y_train[i % TRAIN_SAMPLES]);\n }\n\n // predict\n for (int i = 0; i < TEST_SAMPLES; i++) {\n int y_true = y_test[i];\n int y_pred = clf.predict(X_test[i]);\n\n Serial.print("Predicted ");\n Serial.print(y_pred);\n Serial.print(" vs ");\n Serial.println(y_true);\n scorer.scoreOne(y_true, y_pred);\n }\n\n Serial.print("Accuracy = ");\n Serial.print(scorer.accuracy() * 100);\n Serial.print(" out of ");\n Serial.print(scorer.support());\n Serial.println(" samples");\n delay(30000);\n}\nIf you refer to the previous posts on SGD and Passive-Aggressive, you'll notice that you would be able to replace one with the other and your code will change by 1 single line only. 
This let's you experiment to find the best configuration for your project without hassle.\nAccuracy\nWell, accuracy vary.\nIn my tests, I couldn't get predictable accuracy on all datasets. I couldn't even get acceptable accuracy on the Iris dataset (60% max). But I got 90% accuracy on the Digits dataset from scikit-learn with 6 classes.\nYou have to experiment. Try Passive-Aggressive with many C values. If it doesn't work, try SGD with varying momentum and alpha. Try to repeat the training over the dataset 5, 10 times.\nIn a next post I'll report my benchmarks so you can see what works for you and what not.\nThis is an emerging field for me, so I will need time to master it.\n\nAs always, you can find the examle on Github with a the dataset to experiment with.\nL'articolo Incremental multiclass classification on microcontrollers: One vs One proviene da Eloquent Arduino Blog.", "date_published": "2020-04-26T10:01:14+02:00", "date_modified": "2020-04-26T11:52:29+02:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "incremental-learning", "microml", "ml", "Arduino Machine learning" ] } ] }