If you have an internet-connected board, you can now load TensorFlow Lite TinyML models on demand, directly from the internet! This way you can repurpose your board for different applications without flashing new firmware. Let's see how in this tutorial.
If your board has internet connectivity (either Ethernet or WiFi), you may want to load different models depending on the user's needs, or you may host your own models and keep them updated, improving the end-user experience without requiring a firmware update.
Whatever your use case, downloading a model from the internet is really easy, much like loading one from an SD card. It is a 3-step process:
1. connect to the internet (either Ethernet or WiFi)
2. download the model from a URL
3. initialize TensorFlow from the downloaded model
The whole sketch is quite short and mostly contains boilerplate code (connect to WiFi, make an HTTP request, run the TensorFlow TinyML inference). I will make use of the EloquentTinyML library because it makes using TensorFlow painless.
The sketch should work on many different boards without any (significant) modification: I tested it on an ESP32, but you could use the new Arduino RP2040 Connect, for example.
As always, we'll load the sine model, this time from an HTTP server (in a future post I will show the HTTPS version).
#include <SPI.h>
#include <WiFi.h>
// include WiFiNINA instead of WiFi for Arduino boards
// #include <WiFiNINA.h>
#include <HttpClient.h>
#include <EloquentTinyML.h>
#include <eloquent_tinyml/tensorflow.h>

#define N_INPUTS 1
#define N_OUTPUTS 1
#define TENSOR_ARENA_SIZE 2*1024

char SSID[] = "NetworkSSID";
char PASS[] = "Password";

// this is a server I own that doesn't require HTTPS; you can replace it
// with whatever server you have at hand that supports HTTP
const char server[] = "152.228.173.213";
const char path[] = "/sine.bin";

WiFiClient client;
HttpClient http(client);
uint8_t *model;
Eloquent::TinyML::TensorFlow::TensorFlow<N_INPUTS, N_OUTPUTS, TENSOR_ARENA_SIZE> tf;

void setup() {
    Serial.begin(115200);
    delay(2000);
    wifi_connect();
    download_model();

    // init TensorFlow from the downloaded model
    if (!tf.begin(model)) {
        Serial.println("Cannot initialize model");
        Serial.println(tf.errorMessage());
        delay(60000);
    }
    else {
        Serial.println("Model loaded, starting inference");
    }
}

void loop() {
    // pick a random x and predict its sine
    float x = 3.14 * random(100) / 100;
    float y = sin(x);
    float input[1] = { x };
    float predicted = tf.predict(input);

    Serial.print("sin(");
    Serial.print(x);
    Serial.print(") = ");
    Serial.print(y);
    Serial.print("\t predicted: ");
    Serial.println(predicted);
    delay(1000);
}

/**
 * Connect to WiFi
 */
void wifi_connect() {
    int status = WL_IDLE_STATUS;

    while (status != WL_CONNECTED) {
        Serial.print("Attempting to connect to SSID: ");
        Serial.println(SSID);
        status = WiFi.begin(SSID, PASS);
        delay(1000);
    }

    Serial.println("Connected to WiFi");
}

/**
 * Download the model from the given URL
 */
void download_model() {
    http.get(server, path);
    http.responseStatusCode();
    http.skipResponseHeaders();

    int modelSize = http.contentLength();

    Serial.print("Model size is: ");
    Serial.println(modelSize);
    Serial.println();

    // allocate a buffer large enough to hold the whole model
    model = (uint8_t*) malloc(modelSize);
    http.read(model, modelSize);
}
Check out the full project code on GitHub and remember to star it!