{ "version": "https://jsonfeed.org/version/1.1", "user_comment": "This feed allows you to read the posts from this site in any feed reader that supports the JSON Feed format. To add this feed to your reader, copy the following URL -- https://eloquentarduino.github.io/feed/json/ -- and add it your reader.", "home_page_url": "https://eloquentarduino.github.io/", "feed_url": "https://eloquentarduino.github.io/feed/json/", "language": "en-US", "title": "Eloquent Arduino Blog", "description": "Machine learning on Arduino, programming & electronics", "items": [ { "id": "https://eloquentarduino.github.io/?p=1416", "url": "https://eloquentarduino.github.io/2020/12/tinyml-benchmark-table/", "title": "The Grand Benchmark Table of Embedded Machine Learning", "content_html": "
How tiny is TinyML? How fast is TinyML?
\nDo you want to get some REAL numbers on embedded machine learning on Arduino, STM32, ESP32, Seeedstudio boards (and more coming)?
\nThis page will answer all your questions!
\n\n\n
If you're new to this blog, you need to know that (almost one year ago) I settled on a mission to bring machine learning to embedded microcontrollers of all sizes (even the Attiny85!).
\nTo me, it is just insane to deploy heavyweight Neural Networks to such small devices if you don't need their expressiveness (mainly for image and audio analysis). The vast majority of embedded ML tasks, in fact, deals with sensor readings, which can easily be handled with "traditional" ML algorithms.
\nToday's industry seems to lean more toward Neural Networks, though, so I thought it would be beneficial for you readers to get an actual grasp of the potential of traditional Machine Learning algorithms in the embedded context.
\nOn this blog you can find posts about:
\nAll these algorithms go a long way in both accuracy and resource consumption, so (in my opinion) they should be your first choice when developing a new project.
\nTo support my claims, I made a huge effort to collect real-world data, and now I want to share it with you.
\nBefore you ask:
\n"Are Neural Networks models benchmarked here?". No.
\n"Will Neural Networks model be benchmarked in the future?". Yes, as soon as I'm comfortable with them: I want to create a fair comparison between NN and traditional algorithms.
\nSo now let's move to the contents.
\nI ran the benchmarks on the boards I have at hand: they were all purchased by me, except for the Arduino Nano BLE Sense (given to me by the Arduino team).
\nI picked a small selection of toy and real-world datasets to benchmark the classifiers against (the real-world ones were taken from a TinyML Talks presentation when readily available, plus some more from the UCI database almost at random).
\nHere's the list of the benchmarked datasets, with the shape of each dataset (in the format number of samples x number of features x number of classes).
Iris (150 x 4 x 3): from the sklearn package
Wine (178 x 13 x 3): from the sklearn package
Digits (1797 x 64 x 10): from the sklearn package
Human Activity (10299 x 561 x 6)
Sport Activity (4800 x 180 x 10)
Gas Sensor Array Drift (1000 x 128 x 6)
EMG (1648 x 63 x 5)
Gesture Phase Segmentation (1000 x 19 x 5)
Statlog (Vehicle Silhouettes) (846 x 18 x 4)
Mammographic Mass (830 x 4 x 2)
Sensorless Drive Diagnosis (1000 x 48 x 11)
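As a quick sanity check, the three sklearn toy datasets above can be loaded in one line each; a minimal sketch (the shapes printed should match the list):
from sklearn.datasets import load_iris, load_wine, load_digits

# each loader returns the (samples x features) matrix and the label vector
for loader in (load_iris, load_wine, load_digits):
    X, y = loader(return_X_y=True)
    print(loader.__name__, X.shape, len(set(y)))

# expected output:
# load_iris (150, 4) 3
# load_wine (178, 13) 3
# load_digits (1797, 64) 10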
The datasets were chosen to be representative of different domains, and the list will grow in the coming weeks.
\nSome datasets are used as-is, others were pre-processed with very light feature extraction. In detail:
\nHuman Activity: features were extracted with a rolling window; for each window, min/max/avg/std/skew/kurtosis were calculated
Sport Activity: got the same pre-processing, and the number of activities was reduced from 19 to 10
EMG: features were extracted with a rolling window; for each window, the Root Mean Square value was calculated
The reported benchmarks only consider the inference process: feature extraction is not included! Nevertheless, only features with linear time complexity were used, so any MCU will have no problem computing them.
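To make the pre-processing concrete, here is a minimal sketch of that kind of rolling-window extraction with numpy and scipy (this is only an illustration, not the exact script I used; the window size and step are arbitrary):
import numpy as np
from scipy.stats import skew, kurtosis

def window_features(signal, window_size=50, step=25):
    # min/max/avg/std/skew/kurtosis for each window of the signal
    features = []
    for start in range(0, len(signal) - window_size + 1, step):
        w = signal[start:start + window_size]
        features.append([np.min(w), np.max(w), np.mean(w), np.std(w), skew(w), kurtosis(w)])
    return np.asarray(features)

# example: a noisy sine wave turned into (number of windows) x 6 features
t = np.linspace(0, 10, 1000)
X = window_features(np.sin(t) + np.random.normal(0, 0.1, len(t)))
print(X.shape)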
\nThe following classifiers are benchmarked:
\nWhy these classifiers?
\nBecause they're all supported by the micromlgen package, so they can easily be ported to plain C.
\n* XGBoost porting failed on some datasets, so you will see holes in the data. I will correct this in the coming weeks.
\nmicromlgen actually supports Support Vector Machines too: they are not included because, on real-world datasets, the number of support vectors is so high (hundreds or even thousands) that no single board could handle it.
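For reference, porting any of the classifiers above follows the same pattern I showed in the posts linked earlier; a minimal sketch with scikit-learn and micromlgen (Random Forest here is just an example, swap in the classifier you prefer):
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris
from micromlgen import port

# train any of the supported classifiers on your data...
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=10).fit(X, y)

# ...and print the plain C code to paste into your sketch
print(port(clf))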
If you want to stay up to date with the new numbers, subscribe to the newsletter: I promise you won't receive more than 1 mail per month.
\nThis section reports a selection of the charts generated from the benchmark results, to give you a quick glance at the capabilities of the aforementioned boards and algorithms in terms of performance and accuracy.
\nIf you'd like an interactive view of the data, there's a Colab Notebook that reproduces the charts reported here, where you can interact with the data as you like.
\nAt the very end of the article, you can also find a link to the raw CSV file I generated (as you can see, it required A LOT of work to create).
\nThe overall accuracy of each classifier on each dataset (this plot is not bound to any particular board; it is computed "offline").
\n\nComment: many classifiers (Random Forest, XGBoost, Logistic Regression) can easily achieve up to 95+ % accuracy on some datasets with minimal pre-processing, while still scoring 85+ % on more difficult datasets.
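If you want to reproduce this kind of "offline" accuracy on your own dataset, a plain cross-validation in scikit-learn is enough; a minimal sketch (not the exact evaluation script I used):
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
clf = RandomForestClassifier(n_estimators=20)

# mean accuracy over 5 folds, computed on the PC ("offline")
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean(), scores.std())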
\nThese charts plot, for each dataset, how much flash (as a percentage of the total available) the compiled classifier takes (visit the Colab Notebook to see all the charts).
\n\n\nComment: DecisionTree, GaussianNB and Logistic Regression require the least amount of flash. XGBoost is very "flash-intensive"; RandomForest sits in the middle.
\nAs low as 6% of flash size for a fully functional DecisionTree with 85+% accuracy.
\nThese charts plot, for each dataset, how long it takes for the classifier to run (only the classification, no feature extraction!).
\n\n\nComment: DecisionTree is the clear winner here, with minimal inference time (from 0.4 to 30 microseconds), followed by Random Forest. Logistic Regression, XGBoost and GaussianNB are the slowest.
\nAs fast as sub-millisecond inference time for a fully functional DecisionTree with 85+% accuracy.
\nThis plot shows the inference time versus the classification accuracy. The more upper-left a point is, the better (fast inference time, high accuracy).
\nClick here to open the image at full size
\n\nComment: as already stated, you will see a lot of blue markers (Decision Tree) in the top left, since it is very fast and quite accurate. Moving to the right you can see purple (Logistic Regression) and orange (Random Forest). GaussianNB (red) exhibits quite low accuracy instead.
\nThis plot shows the inference time versus the (relative) flash requirement. The more lower-left a point is, the better (fast inference time, low flash requirements).
\nClick here to open the image at full size
\n\nComment: again, we see that blue (Decision Tree) is both fast and small, followed by Logistic Regression and Random Forest. Now it is clear that XGBoost (green), while not being the slowest, is the most demanding in terms of flash.
\nI hope this post helped you broaden your view on TinyML: how tiny it can be, how fast it can be (sub-millisecond inference!), and how wide its reach is.
\nPlease don't hesitate to comment with your opinion on the subject, suggestions of new boards or datasets I should benchmark, or any other idea you have in mind that can contribute to the purpose of this page.
\nAnd don't forget to stay tuned for updates: I already have 2 more boards I will benchmark in the coming days!
\nAs promised, here's the link to the raw benchmarks in CSV format.
\nYou can run your own analysis and visualization on it: if you use it in your own work, please add a link to this post.
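As a starting point for your own analysis, here is a minimal pandas sketch; note that the file name and column names below are hypothetical, so check the header of the actual CSV and adapt them:
import pandas as pd

# hypothetical file and column names: adjust them to the real CSV header
df = pd.read_csv('benchmarks.csv')

# average inference time per board and classifier
print(df.groupby(['board', 'classifier'])['inference_time'].mean())

# best accuracy obtained on each dataset
print(df.groupby('dataset')['accuracy'].max())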
\nIn future posts I will share how I collected all those numbers, so subscribe to the newsletter to stay up to date!
\nThe article The Grand Benchmark Table of Embedded Machine Learning originally appeared on Eloquent Arduino Blog.
\n", "content_text": "How tiny is TinyML? How fast is TinyML?\nDo you want to get some REAL numbers on embedded machine learning on Arduino, STM32, ESP32, Seeedstudio boards (and more coming)? \nThis page will answer all your questions!\n\n\nBackground\nIf you're new to this blog, you need to know that (almost one year ago) I settled on a mission to bring machine learning to embedded microcontrollers of all sizes (even the Attiny85!).\nTo me, it is just insane to deploy heavyweight Neural Networks to such small devices, if you don't need their expressiveness (mainly image and audio analysis). The vast majority of embedded ML tasks is, in fact, related to sensors' readings, which can easily be solved with "traditional" ML algorithms.\nToday's industry seems to be more leaned toward Neural Networks, though, so I thought it would be beneficial for you readers to get an actual grasp on the potential of traditional Machine learning algorithms in the embedded context.\nOn this blog you can find posts about:\n\nDecision Tree, Random Forest and XGBoost\nGaussian Naive Bayes\nSEFR - a binary classifier\nPCA for dimensionality reduction\nRelevant Vector Machines\nSVM for gesture detection\nOne Class SVM for anomaly detection\n\nAll these algorithms go a long way in both accuracy and resource comsumption, so (in my opinion) they should be your first choice when developing a new project.\nTo support my claimings I made a huge effort to collect real world data, and now I want to share this data with you.\nBefore you ask:\n"Are Neural Networks models benchmarked here?". No.\n"Will Neural Networks model be benchmarked in the future?". Yes, as soon as I'm comfortable with them: I want to create a fair comparison between NN and traditional algorithms.\nSo now let's move to the contents.\nThe boards\nI run the benchmarks on the boards I have at hand: they were all purchased by me, except for the Arduino Nano BLE Sense (given to me by the Arduino team).\n\nEspressif ESP32\nEspressif ESP8266 NodeMCU v1.0\nSTM32 Nucleo L432KC (Cortex M4)\nSeeedstudio XIAO (SAMD21 Cortex M0)\nArduino Nano 33 BLE Sense (Cortex M4F)\n\nThe datasets\nI picked a small selection of toy and real world datasets to benchmark the classifiers against (the real world ones were picked from a TinyML Talks presentation when easily available, plus some more from the UCI database almost at random).\nHere's the list of the benchmarked datasets, with the shape of the dataset (in the format number of samples x number of features x number of classes).\n\nIris (150 x 4 x 3): from the sklearn package\nWine (178 x 13 x 3): from the sklearn package\nDigits (1797 x 64 x 10): from the sklearn package\nHuman Activity (10299 x 561 x 6)\nSport Activity (4800 x 180 x 10)\nGas Sensor Array Drift (1000 x 128 x 6)\nEMG (1648 x 63 x 5 )\nGesture Phase Segmentaion (1000 x 19 x 5)\nStatlog (Vehicle Silhouettes) (846 x 18 x 4)\nMammographic Mass (830 x 4 x 2)\nSensorless Drive Diagnosis (1000 x 48 x 11)\n\nThe datasets are chosen to be representative of different domains and the list will grow in the next weeks.\nSome datasets are used as-is, others were pre-processed with very light feature extraction. 
In detail:\n\nHuman Activity features were extracted with a rolling window, and for each window min/max/avg/std/skew/kurtosis were calculated\nSport Activity got the same pre-processing, and the number of actvities was reduced from 19 to 10\nEMG features were extracted with a rolling window, and for each window the Root Mean Square value was calculated\n\nThe reported benchmarks only consider the inference process: any feature extraction is not included! Nevertheless, only features with linear time complexity were used, so any MCU will have no problem in computing them.\nThe classifiers\nThe following classifiers are benchmarked:\n\nDecision Tree\nRandom Forest\nXGBoost\nLogistic Regression\nGaussian Naive Bayes\n\nWhy these classifiers?\nBecause they're all supported by the micromlgen package, so they can easily be ported to plain C.\n* XGBoost porting failed on some datasets, so you will see holes in the data. I will correct this in the next weeks\nmicromlgen actually supports Support Vector Machines, too: it is not included because on real world datasets the number of support vector is so high (hundreds or even thousands) that no single board could handle that.\nIf you want to stay up to date with the new numbers, subscribe to the newsletter: I promise you won't receive more than 1 mail per month.\n\r\n\r\n\r\n \r\n\tFinding this content useful?\r\n\r\n\t\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t \r\n \r\n \r\n \r\n\r\n\r\n\r\n\nThe Results\nThis section reports (a selection of) the charts generated from the benchmark results to give you a quick glance of the capabilities of the aforementioned boards and algorithms in terms of performance and accuracy.\nIf you like an interactive view of the data, there's a Colab Notebook that reproduces the charts reported here, where you can interact with the data as you like.\nAt the very end of the article, you can also find a link to the raw CSV file I generated (as you can see, it required A LOT of work to create).\nAccuracy\nThe overall accuracy of each classifier on each dataset (this plot is not bounded to any particular board, it is computed "offline").\n\nComment: many classifiers (Random Forest, XGBoost, Logistic Regression) can easily achieve up to 95+ % accuracy on some datasets with minimal pre-processing, while still scoring 85+ % on more difficult datasets.\nFlash percent\nThese charts plot, for each dataset, how much flash (in percent on the total available) it takes for the classifier to compile (visit the Colab Notebook to see all the charts).\n\n\nComment: DecisionTree, GaussianNB and Logistic Regression require the least amount of flash. XGBoost is very "flash-intensive"; RandomForest sits in the middle.\nHow tiny can TinyML be?\nAs low as 6% of flash size for a fully functional DecisionTree with 85+% accuracy.\nInference time\nThese charts plot, for each dataset, how long it takes for the classifier to run (only the classification, no feature extraction!).\n\n\nComment: DecisionTree is the clear winner here, with minimal inference time (from 0.4 to 30 microseconds), followed by Random Forest. Logistic Regression, XGBoost and GaussianNB are the slowest.\nHow fast can TinyML be?\nAs fast as sub-millisecond inference time for a fully functional DecisionTree with 85+% accuracy.\nInference time vs Accuracy\nThis plot correlates the inference time vs the classification accuracy. 
The more upper-left a point is, the better (fast inference time, high accuracy).\nClick here to open the image at full size\n\nComment: as already stated, you will see a lot of blue markers (Decision Tree) in the top left, since it is very fast and quite accurate. Moving to the right you can see purple (Logistic Regression) and orange (Random Forest). GaussianNB (red) exhibits quite low accuracy instead.\nInference time vs Flash percent\nThis plot correlates the inference time vs the the (relative) flash requirement. The more lower-left a point is, the better (fast inference time, low flash requirements).\nClick here to open the image at full size\n\nComment: Again, we see blue (Decision Tree) is both fast and small, followed by Logistic Regression and Random Forest. Now it is clear that XGBoost (green), while not being the slowest, is the more demanding in terms of flash.\nConclusions\nI hope this post helped you broaden your view on TinyML, on how tiny it can be, how fast it can be (sub-millisecond inference!), how wide it is.\nPlease don't hesitate to comment with your opinion on the subject, suggestions of new boards or datasets I should benchmark, or any other idea you have in mind that can contribute to the purpose of this page.\nAnd don't forget to stay tuned for the updates: I already have 2 more boards I will benchmark in the next days!\n\nAs promised, here's the link to the raw benchmarks in CSV format.\nYou can run your own analysis and visualization on it: if you use it in your own work, please add a link to this post.\nIn future posts I will share how I collected all those numbers, so subscribe to the newsletter to stay up to date!\n\r\n\r\n\r\n \r\n\tFinding this content useful?\r\n\r\n\t\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t \r\n \r\n \r\n \r\n\r\n\r\n\r\n\nL'articolo The Grand Benchmark Table of Embedded Machine Learning proviene da Eloquent Arduino Blog.", "date_published": "2020-12-16T21:31:10+01:00", "date_modified": "2020-12-20T17:13:28+01:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "Arduino Machine learning" ] }, { "id": "https://eloquentarduino.github.io/?p=1390", "url": "https://eloquentarduino.github.io/2020/12/esp32-cam-motion-detection-with-photo-capture-grayscale-version/", "title": "Esp32-cam motion detection WITH PHOTO CAPTURE! (grayscale version)", "content_html": "Do you want to transform your cheap esp32-cam in a DIY surveillance camera with moton detection AND photo capture?
\nLook no further: this post explains STEP-BY-STEP all you need to know to build one yourself!
\n\n\n
As I told you in the Easier, faster pure video Esp32-cam motion detection post, motion detection on the esp32-cam seems to be the hottest topic on my blog, so I thought it deserved some more tutorials.
\nWithout question, the #1 request you made in the comments was
\n\n\nHow can I save the image that triggered the motion detection to the disk?
\n
Well, in this post I will show you how to save the image to the SPIFFS filesystem your esp32-cam comes equipped with!
\nPlease read the post on easier, faster esp32-cam motion detection first if you want to understand the following code.
\nIt took me quite some time to write this post because I was struggling to design a clear, easy-to-use API for the motion detection feature and the image storage.
\nAnd I have to admit that, even after so long, I'm still not satisfied with the results.
\nNonetheless, it works, and it works well in my opinion, so I will publish this and maybe get feedback from you to help me improve (so please leave a comment if you have any suggestions).
\nI won't bother you with the design considerations I made, since this is a hands-on tutorial, so let's take a look at the code to implement motion detection on the esp32-cam or any other esp32 with a camera attached (I'm using the M5Stick camera).
\nFirst of all, you need the EloquentVision library: you can install it either from Github or via the Arduino IDE's Library Manager.
Next, the code.
\n// Change according to your model\n// The models available are\n// - CAMERA_MODEL_WROVER_KIT\n// - CAMERA_MODEL_ESP_EYE\n// - CAMERA_MODEL_M5STACK_PSRAM\n// - CAMERA_MODEL_M5STACK_WIDE\n// - CAMERA_MODEL_AI_THINKER\n#define CAMERA_MODEL_M5STACK_WIDE\n\n#include <FS.h>\n#include <SPIFFS.h>\n#include "EloquentVision.h"\n\n// set the resolution of the source image and the resolution of the downscaled image for the motion detection\n#define FRAME_SIZE FRAMESIZE_QVGA\n#define SOURCE_WIDTH 320\n#define SOURCE_HEIGHT 240\n#define CHANNELS 1\n#define DEST_WIDTH 32\n#define DEST_HEIGHT 24\n#define BLOCK_VARIATION_THRESHOLD 0.3\n#define MOTION_THRESHOLD 0.2\n\n// we're using the Eloquent::Vision namespace a lot!\nusing namespace Eloquent::Vision;\nusing namespace Eloquent::Vision::IO;\nusing namespace Eloquent::Vision::ImageProcessing;\nusing namespace Eloquent::Vision::ImageProcessing::Downscale;\nusing namespace Eloquent::Vision::ImageProcessing::DownscaleStrategies;\n\n// an easy interface to capture images from the camera\nESP32Camera camera;\n// the buffer to store the downscaled version of the image\nuint8_t resized[DEST_HEIGHT][DEST_WIDTH];\n// the downscaler algorithm\n// for more details see https://eloquentarduino.github.io/2020/05/easier-faster-pure-video-esp32-cam-motion-detection\nCross<SOURCE_WIDTH, SOURCE_HEIGHT, DEST_WIDTH, DEST_HEIGHT> crossStrategy;\n// the downscaler container\nDownscaler<SOURCE_WIDTH, SOURCE_HEIGHT, CHANNELS, DEST_WIDTH, DEST_HEIGHT> downscaler(&crossStrategy);\n// the motion detection algorithm\nMotionDetection<DEST_WIDTH, DEST_HEIGHT> motion;\n\nvoid setup() {\n Serial.begin(115200);\n SPIFFS.begin(true);\n camera.begin(FRAME_SIZE, PIXFORMAT_GRAYSCALE);\n motion.setBlockVariationThreshold(BLOCK_VARIATION_THRESHOLD);\n}\n\nvoid loop() {\n camera_fb_t *frame = camera.capture();\n\n // resize image and detect motion\n downscaler.downscale(frame->buf, resized);\n motion.update(resized);\n motion.detect();\n\n if (motion.ratio() > MOTION_THRESHOLD) {\n Serial.println("Motion detected");\n\n // here we want to save the image to disk\n }\n}
\nFine, we can detect motion!
\nNow we want to save the triggering image to disk in a format that we can decode without any custom software. It would be cool if we could see the image using the native Esp32 Filesystem Browser sketch.
\nThanks to the guys at Espressif, the esp32 is able to encode a raw image to JPEG format: it is convenient to use (any PC on earth can read a JPEG) and it is also fast.
\nand thanks to the reader ankaiser for pointing it out
\nIt's really easy to do thanks to the EloquentVision library.
\nif (motion.ratio() > MOTION_THRESHOLD) {\n Serial.println("Motion detected");\n\n // quality ranges from 10 to 64 -> the higher, the more detailed\n uint8_t quality = 30;\n JpegWriter<SOURCE_WIDTH, SOURCE_HEIGHT> jpegWriter;\n File imageFile = SPIFFS.open("/capture.jpg", "wb");\n\n // it takes < 1 second for a 320x240 image and 4 Kb of space\n jpegWriter.writeGrayscale(imageFile, frame->buf, quality);\n imageFile.close();\n}
\nWell done! Now your image is on the disk and can be downloaded with the FSBrowser sketch.
\nNow you have all the tools you need to create your own DIY surveillance camera with motion detection feature!
\nYou can use it to catch thieves (though I discourage you from relying on such a rudimentary setup!), to capture images of wild animals in your garden (birds, squirrels or the like), or for any other application you see fit.
\nOf course, a proper motion detection setup should be more complex than the one presented here. Nevertheless, a couple of quick fixes can greatly improve the usability of this project with little effort. Here are a couple of suggestions.
\n#1: Debouncing successive frames: the code presented in this post is a stripped-down version of a more complete esp32-cam motion detection example sketch.
\nThat sketch implements a debouncing function to prevent writing "ghost images" (see the original post on motion detection for clear evidence of this effect).
\n#2: Proper file naming: the example sketch uses a fixed filename for the image. This means any new image will overwrite the older one, which may be undesirable depending on your requirements. A proper way to handle this would be to attach an RTC and name the image after the time it was captured (something like "motion_2020-12-03_08:09:10.bmp").
\n#3: RGB images: this is something I'm working on. I mean, the Bitmap writer is there (so you could actually use it to store images on your esp32), but the multi-channel motion detection is driving me crazy, I need some more time to design it the way I want, so stay tuned!
\nI hope you enjoyed this tutorial on esp32-cam motion detection with photo capture: it was born as a response to your requests, so don't be afraid to ask me anything: I will do my best to help you!
\nThe article Esp32-cam motion detection WITH PHOTO CAPTURE! (grayscale version) originally appeared on Eloquent Arduino Blog.
\n", "content_text": "Do you want to transform your cheap esp32-cam in a DIY surveillance camera with moton detection AND photo capture?\nLook no further: this post explains STEP-BY-STEP all you need to know to build one yourself!\n\n\nAs I told you in the Easier, faster pure video Esp32-cam motion detection post, motion detection on the esp32-cam seems to be the hottest topic on my blog, so I thought it deserved some more tutorials.\nWithout question, to #1 request you made me in the comments was\n\nHow can I save the image that triggered the motion detection to the disk?\n\nWell, in this post I will show you how to save the image to the SPIFFS filesystem your esp32-cam comes equipped with!\nMotion detection, refactored\nPlease read the post on easier, faster esp32-cam motion detection first if you want to understand the following code.\nIt took me quite some time to write this post because I was struggling to design a clear, easy to use API for the motion detection feature and the image storage.\nAnd I have to admit that, even after so long, I'm still not satisfied with the results.\nNonetheless, it works, and it works well in my opinion, so I will publish this and maybe get feedback from you to help me improve (so please leave a comment if you have any suggestion).\nI won't bother you with the design considerations I took since this is an hands-on tutorial, so let's take a look at the code to implement motion detection on the esp32-cam or any other esp32 with a camera attached (I'm using the M5Stick camera).\nFirst of all, you need the EloquentVision library: you can install it either from Github or using the Arduino IDE's Library Manager.\nNext, the code.\n// Change according to your model\n// The models available are\n// - CAMERA_MODEL_WROVER_KIT\n// - CAMERA_MODEL_ESP_EYE\n// - CAMERA_MODEL_M5STACK_PSRAM\n// - CAMERA_MODEL_M5STACK_WIDE\n// - CAMERA_MODEL_AI_THINKER\n#define CAMERA_MODEL_M5STACK_WIDE\n\n#include <FS.h>\n#include <SPIFFS.h>\n#include "EloquentVision.h"\n\n// set the resolution of the source image and the resolution of the downscaled image for the motion detection\n#define FRAME_SIZE FRAMESIZE_QVGA\n#define SOURCE_WIDTH 320\n#define SOURCE_HEIGHT 240\n#define CHANNELS 1\n#define DEST_WIDTH 32\n#define DEST_HEIGHT 24\n#define BLOCK_VARIATION_THRESHOLD 0.3\n#define MOTION_THRESHOLD 0.2\n\n// we're using the Eloquent::Vision namespace a lot!\nusing namespace Eloquent::Vision;\nusing namespace Eloquent::Vision::IO;\nusing namespace Eloquent::Vision::ImageProcessing;\nusing namespace Eloquent::Vision::ImageProcessing::Downscale;\nusing namespace Eloquent::Vision::ImageProcessing::DownscaleStrategies;\n\n// an easy interface to capture images from the camera\nESP32Camera camera;\n// the buffer to store the downscaled version of the image\nuint8_t resized[DEST_HEIGHT][DEST_WIDTH];\n// the downscaler algorithm\n// for more details see https://eloquentarduino.github.io/2020/05/easier-faster-pure-video-esp32-cam-motion-detection\nCross<SOURCE_WIDTH, SOURCE_HEIGHT, DEST_WIDTH, DEST_HEIGHT> crossStrategy;\n// the downscaler container\nDownscaler<SOURCE_WIDTH, SOURCE_HEIGHT, CHANNELS, DEST_WIDTH, DEST_HEIGHT> downscaler(&crossStrategy);\n// the motion detection algorithm\nMotionDetection<DEST_WIDTH, DEST_HEIGHT> motion;\n\nvoid setup() {\n Serial.begin(115200);\n SPIFFS.begin(true);\n camera.begin(FRAME_SIZE, PIXFORMAT_GRAYSCALE);\n motion.setBlockVariationThreshold(BLOCK_VARIATION_THRESHOLD);\n}\n\nvoid loop() {\n camera_fb_t *frame = camera.capture();\n\n // resize image and 
detect motion\n downscaler.downscale(frame->buf, resized);\n motion.update(resized);\n motion.detect();\n\n if (motion.ratio() > MOTION_THRESHOLD) {\n Serial.println("Motion detected");\n\n // here we want to save the image to disk\n }\n}\nSave image to disk\nFine, we can detect motion!\nNow we want to save the triggering image to disk in a format that we can decode without any custom software. It would be cool if we could see the image using the native Esp32 Filesystem Browser sketch.\nThankfully to the guys at espressif, the esp32 is able to encode a raw image to JPEG format: it is convenient to use (any PC on earth can read a jpeg) and it is also fast.\nand thanks to the reader ankaiser for pointing it out\nIt's really easy to do thanks to the EloquentVision library.\nif (motion.ratio() > MOTION_THRESHOLD) {\n Serial.println("Motion detected");\n\n // quality ranges from 10 to 64 -> the higher, the more detailed\n uint8_t quality = 30;\n JpegWriter<SOURCE_WIDTH, SOURCE_HEIGHT> jpegWriter;\n File imageFile = SPIFFS.open("/capture.jpg", "wb");\n\n // it takes < 1 second for a 320x240 image and 4 Kb of space\n jpegWriter.writeGrayscale(imageFile, frame->buf, quality);\n imageFile.close();\n}\nWell done! Now your image is on the disk and can be downloaded with the FSBrowser sketch.\nNow you have all the tools you need to create your own DIY surveillance camera with motion detection feature!\nYou can use it to catch thieves (I discourage you to rely on such a rudimentary setup however!), to capture images of wild animals in your garden (birds, sqirrels or the like), or any other application you see fit.\nFurther improvements\nOf course you may well understand that a proper motion detection setup should be more complex than the one presented here. Nevertheless, a couple of quick fixes can greatly improve the usability of this project with little effort. Here I suggest you a couple.\n#1: Debouncing successive frames: the code presented in this post is a stripped down version of a more complete esp32-cam motion detection example sketch.\nThat sketch implements a debouncing function to prevent writing "ghost images" (see the original post on motion detection for a clear evidence of this effect).\n#2: Proper file naming: the example sketch uses a fixed filename for the image. This means any new image will overwrite the older, which may be undesiderable based on your requirements. A proper way to handle this would be to attach an RTC and name the image after the time it occurred (something like "motion_2020-12-03_08:09:10.bmp")\n#3: RGB images: this is something I'm working on. I mean, the Bitmap writer is there (so you could actually use it to store images on your esp32), but the multi-channel motion detection is driving me crazy, I need some more time to design it the way I want, so stay tuned!\n\nI hope you enjoyed this tutorial on esp32-cam motion detection with photo capture: it was born as a response to your asking, so don't be afraid and ask me anything: I will do my best to help you!\nL'articolo Esp32-cam motion detection WITH PHOTO CAPTURE! 
(grayscale version) proviene da Eloquent Arduino Blog.", "date_published": "2020-12-03T18:50:59+01:00", "date_modified": "2020-12-06T09:31:20+01:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "Computer vision", "Eloquent library" ] }, { "id": "https://eloquentarduino.github.io/?p=1365", "url": "https://eloquentarduino.github.io/2020/11/tinyml-on-arduino-and-stm32-cnn-convolutional-neural-network-example/", "title": "TinyML on Arduino and STM32: CNN (Convolutional Neural Network) example", "content_html": "Painless TinyML Convolutional Neural Network on your Arduino and STM32 boards: the MNIST dataset example!
\nAre you fascinated by TinyML and Tensorflow for microcontrollers?
\nDo you want to run a CNN (Convolutional Neural Network) on your Arduino and STM32 boards?
\nDo you want to do it without pain?
\nEloquentTinyML is the library for you!
\n\n\n
EloquentTinyML, my library to easily run Tensorflow Lite neural networks on Arduino microcontrollers, is gaining some popularity so I think it's time for a good tutorial on the topic.
\nIf you're a seasoned follower of my blog, you may know that I don't really like Tensorflow on microcontrollers, because it is often "over-sized" for the project at hand and there are leaner, faster alternatives.
\nNonetheless, Tensorflow is gaining much popularity in the embedded world, so I'll try to give my contribution too.
\nIn this tutorial, I'm going to show you step by step how to train a CNN in Tensorflow and deploy it to your board: I tested the code both on the Arduino Nano 33 BLE Sense and the STM32 Nucleo L432KC.
\nI'm not an expert in either Tensorflow or Convolutional Neural Networks, so I kept the project as simple as possible. I used an image-like dataset to create a setup where a CNN should perform well: the dataset is the MNIST handwritten digits one.
\n\nIt is composed of 8x8 images of handwritten digits, from 0 to 9, and can be easily imported via the scikit-learn Python package.
Regarding the CNN topology, I wanted to stay as lean as possible: the goal of this tutorial is to teach you how to deploy your own network, not to achieve 100% accuracy.
\nLet's see step by step how to produce a usable model.
\nWe will need numpy and Tensorflow, of course, plus scikit-learn to load the dataset and tinymlgen to port the CNN to plain C.
import numpy as np\nfrom sklearn.datasets import load_digits\nimport tensorflow as tf\nfrom tensorflow.keras import layers\nfrom tinymlgen import port
\nTo train the network, we need:
\ntraining data
: this is the data the network uses to learn its weightsvalidation data
: this is the data the network uses to understand if it's doing well during learningtest data
: this is the data we use to test the network accuracy once it's done learningdef get_data():\n np.random.seed(1337)\n x_values, y_values = load_digits(return_X_y=True)\n x_values /= x_values.max()\n # reshape to (8 x 8 x 1)\n x_values = x_values.reshape((len(x_values), 8, 8, 1))\n\n # split into train, validation, test\n TRAIN_SPLIT = int(0.6 * len(x_values))\n TEST_SPLIT = int(0.2 * len(x_values) + TRAIN_SPLIT)\n x_train, x_test, x_validate = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])\n y_train, y_test, y_validate = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])\n\n return x_train, x_test, x_validate, y_train, y_test, y_validate
\nNow we have to create our network topology.
\nAs I stated earlier, I wanted to keep this as simple as possible (also considering that we're using a toy dataset): I added a single convolution layer (without even max pooling) followed by the output layer.
\ndef get_model():\n x_train, x_test, x_validate, y_train, y_test, y_validate = get_data()\n\n # create a CNN\n model = tf.keras.Sequential()\n model.add(layers.Conv2D(8, (3, 3), activation='relu', input_shape=(8, 8, 1)))\n # model.add(layers.MaxPooling2D((2, 2)))\n model.add(layers.Flatten())\n model.add(layers.Dense(len(np.unique(y_train))))\n\n model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy'])\n model.fit(x_train, y_train, epochs=50, batch_size=16,\n validation_data=(x_validate, y_validate))\n return model, x_test, y_test
\nDo you think this topology is too simple to learn something useful in so few epochs?
\nThink again: it achieved 97% accuracy!
\nNot bad.
\ndef test_model(model, x_test, y_test):\n x_test = (x_test / x_test.max()).reshape((len(x_test), 8, 8, 1))\n y_pred = model.predict(x_test).argmax(axis=1)\n\n print('ACCURACY', (y_pred == y_test).sum() / len(y_test))
\nOnce we have a trained model that performs well, we want to deploy it to our microcontroller. Thanks to the tinymlgen package, this is as easy as a one-liner.
if __name__ == '__main__':\n model, x_test, y_test = get_model()\n test_model(model, x_test, y_test)\n c_code = port(model, variable_name='digits_model', pretty_print=True)\n print(c_code)
\nOk, now we have the content we need to create an Arduino sketch to run the CNN on our microcontroller.
\nWe will use the EloquentTinyML library to do this without pain.
This is a library to run TinyML models on your microcontroller without messing around with complex compilation procedures and esoteric errors.
\nYou must first install the library at its latest version (0.0.5 or 0.0.4 if not available), either via the Library Manager or directly from Github.
\n#include <EloquentTinyML.h>\n\n// copy the printed code from tinymlgen into this file\n#include "digits_model.h"\n\n#define NUMBER_OF_INPUTS 64\n#define NUMBER_OF_OUTPUTS 10\n#define TENSOR_ARENA_SIZE 8*1024\n\nEloquent::TinyML::TfLite<NUMBER_OF_INPUTS, NUMBER_OF_OUTPUTS, TENSOR_ARENA_SIZE> ml;\n\nvoid setup() {\n Serial.begin(115200);\n ml.begin(digits_model);\n}\n\nvoid loop() {\n // a random sample from the MNIST dataset (precisely the last one)\n float x_test[64] = { 0., 0. , 0.625 , 0.875 , 0.5 , 0.0625, 0. , 0. ,\n 0. , 0.125 , 1. , 0.875 , 0.375 , 0.0625, 0. , 0. ,\n 0. , 0. , 0.9375, 0.9375, 0.5 , 0.9375, 0. , 0. ,\n 0. , 0. , 0.3125, 1. , 1. , 0.625 , 0. , 0. ,\n 0. , 0. , 0.75 , 0.9375, 0.9375, 0.75 , 0. , 0. ,\n 0. , 0.25 , 1. , 0.375 , 0.25 , 1. , 0.375 , 0. ,\n 0. , 0.5 , 1. , 0.625 , 0.5 , 1. , 0.5 , 0. ,\n 0. , 0.0625, 0.5 , 0.75 , 0.875 , 0.75 , 0.0625, 0. };\n // the output vector for the model predictions\n float y_pred[10] = {0};\n // the actual class of the sample\n int y_test = 8;\n\n // let's see how long it takes to classify the sample\n uint32_t start = micros();\n\n ml.predict(x_test, y_pred);\n\n uint32_t timeit = micros() - start;\n\n Serial.print("It took ");\n Serial.print(timeit);\n Serial.println(" micros to run inference");\n\n // let's print the raw predictions for all the classes\n // these values are not directly interpretable as probabilities!\n Serial.print("Test output is: ");\n Serial.println(y_test);\n Serial.print("Predicted proba are: ");\n\n for (int i = 0; i < 10; i++) {\n Serial.print(y_pred[i]);\n Serial.print(i == 9 ? '\\n' : ',');\n }\n\n // let's print the "most probable" class\n // you can either use probaToClass() if you also want to use all the probabilities\n Serial.print("Predicted class is: ");\n Serial.println(ml.probaToClass(y_pred));\n // or you can skip the predict() method and call directly predictClass()\n Serial.print("Sanity check: ");\n Serial.println(ml.predictClass(x_test));\n\n delay(1000);\n}
\nThat's it: if everything went fine, you should see that the predicted class is 8.
I'll report the figures I get for compiling and running this project on the two boards I used.
\nBoard | \nFlash | \nRAM | \nInference time | \n
---|---|---|---|
Nucleus L432KC | \n154560 | \nnot available* | \n7187 | \n
Arduino Nano 33 BLE Sense | \n197656 | \n56160 | \n9400 | \n
* I used the Grumpyoldpizza compiler for the Nucleo, which doesn't report back the RAM usage.
\nWere you able to deploy a CNN to your microcontroller thanks to this tutorial? Or are you having troubles?
\nLet me know in the comments and I will help you, or share your experience with us.
\nYou can find the whole code on Github.
\nThe article TinyML on Arduino and STM32: CNN (Convolutional Neural Network) example originally appeared on Eloquent Arduino Blog.
\n", "content_text": "Painless TinyML Convolutional Neural Network on your Arduino and STM32 boards: the MNIST dataset example!\nAre you fascinated by TinyML and Tensorflow for microcontrollers? \nDo you want to run a CNN (Convolutional Neural Network) on your Arduino and STM32 boards? \nDo you want to do it without pain? \nEloquentTinyML is the library for you!\n\n\nEloquentTinyML, my library to easily run Tensorflow Lite neural networks on Arduino microcontrollers, is gaining some popularity so I think it's time for a good tutorial on the topic.\nIf you're a seasoned follower of my blog, you may know that I don't really like Tensorflow on microcontrollers, because it is often "over-sized" for the project at hand and there are leaner, faster alternatives.\nNonetheless, Tensorflow is gaining much popularity in the embedded world so I'll try to give my contribute too.\nIn this tutorial, I'm going to show you step by step how to train a CNN in Tensorflow and deploy it to you board: I tested the code both on the Arduino Nano 33 BLE Sense and the STM32 Nucleus L432KC.\nTable of contentsHow to train a CNN in TensorflowStep 1. Import the librariesStep 2. Generate train, validation and test dataStep 3. Create and train the modelStep 4. Testing the model accuracyStep 5. Exporting the modelHow to run a CNN on Arduino and STM32 boards with EloquentTinyMLCNN on Arduino and STM32 figuresAnd you?\nHow to train a CNN in Tensorflow\nI'm not an expert either in Tensorflow nor Convolutional Neural Networks, so I kept the project as simple as possible. I used an image-like dataset to create a setup where CNN should perform well: the dataset is the MNIST handwritten digits one.\n\nIt is composed by 8x8 images of handwritten digits, from 0 to 9 and can be easily imported via the scikit-learn Python package.\nRegarding the CNN topology, I wanted to stay as lean as possible: the goal of this tutorial is to teach you how to deploy your own network, not about achieving 100% accuracy.\nLet's see step by step how to produce a usable model.\nStep 1. Import the libraries\nWe will need numpy and Tensorflow, of course, plus scikit-learn to load the dataset and tinymlgen to port the CNN to plain C.\nimport numpy as np\nfrom sklearn.datasets import load_digits\nimport tensorflow as tf\nfrom tensorflow.keras import layers\nfrom tinymlgen import port\nStep 2. Generate train, validation and test data\nTo train the network, we need:\n\ntraining data: this is the data the network uses to learn its weights\nvalidation data: this is the data the network uses to understand if it's doing well during learning\ntest data: this is the data we use to test the network accuracy once it's done learning\n\ndef get_data():\n np.random.seed(1337)\n x_values, y_values = load_digits(return_X_y=True)\n x_values /= x_values.max()\n # reshape to (8 x 8 x 1)\n x_values = x_values.reshape((len(x_values), 8, 8, 1))\n\n # split into train, validation, test\n TRAIN_SPLIT = int(0.6 * len(x_values))\n TEST_SPLIT = int(0.2 * len(x_values) + TRAIN_SPLIT)\n x_train, x_test, x_validate = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])\n y_train, y_test, y_validate = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])\n\n return x_train, x_test, x_validate, y_train, y_test, y_validate\nStep 3. 
Create and train the model\nNow we have to create our network topology.\nAs I stated earlier, I wanted to keep this as simple as possible (also considering that we're using a toy dataset): I added a single convolution layer (without even max pooling) followed by the output layer.\ndef get_model():\n x_train, x_test, x_validate, y_train, y_test, y_validate = get_data()\n\n # create a CNN\n model = tf.keras.Sequential()\n model.add(layers.Conv2D(8, (3, 3), activation='relu', input_shape=(8, 8, 1)))\n # model.add(layers.MaxPooling2D((2, 2)))\n model.add(layers.Flatten())\n model.add(layers.Dense(len(np.unique(y_train))))\n\n model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy'])\n model.fit(x_train, y_train, epochs=50, batch_size=16,\n validation_data=(x_validate, y_validate))\n return model, x_test, y_test\nStep 4. Testing the model accuracy\nDo you think this topology is too simple to learn something useful in so few epochs?\nThink again: it achieved 97% accuracy!\nNot bad.\ndef test_model(model, x_test, y_test):\n x_test = (x_test / x_test.max()).reshape((len(x_test), 8, 8, 1))\n y_pred = model.predict(x_test).argmax(axis=1)\n\n print('ACCURACY', (y_pred == y_test).sum() / len(y_test))\nStep 5. Exporting the model\nOnce we have a trained model that performs well, we want to deploy it to our microcontroller. Thanks to the tinymlgen packages, is as easy as a one-liner.\nif __name__ == '__main__':\n model, x_test, y_test = get_model()\n test_model(model, x_test, y_test)\n c_code = port(model, variable_name='digits_model', pretty_print=True)\n print(c_code)\nHow to run a CNN on Arduino and STM32 boards with EloquentTinyML\nOk, now we have the content we need to create an Arduino sketch to run the CNN on our microcontroller.\nWe will use the EloquentTinyML library to do this without pain.\nThis is a library to run TinyML models on your microcontroller without messing around with complex compilation procedures and esoteric errors.\nYou must first install the library at its latest version (0.0.5 or 0.0.4 if not available), either via the Library Manager or directly from Github.\n#include <EloquentTinyML.h>\n\n// copy the printed code from tinymlgen into this file\n#include "digits_model.h"\n\n#define NUMBER_OF_INPUTS 64\n#define NUMBER_OF_OUTPUTS 10\n#define TENSOR_ARENA_SIZE 8*1024\n\nEloquent::TinyML::TfLite<NUMBER_OF_INPUTS, NUMBER_OF_OUTPUTS, TENSOR_ARENA_SIZE> ml;\n\nvoid setup() {\n Serial.begin(115200);\n ml.begin(digits_model);\n}\n\nvoid loop() {\n // a random sample from the MNIST dataset (precisely the last one)\n float x_test[64] = { 0., 0. , 0.625 , 0.875 , 0.5 , 0.0625, 0. , 0. ,\n 0. , 0.125 , 1. , 0.875 , 0.375 , 0.0625, 0. , 0. ,\n 0. , 0. , 0.9375, 0.9375, 0.5 , 0.9375, 0. , 0. ,\n 0. , 0. , 0.3125, 1. , 1. , 0.625 , 0. , 0. ,\n 0. , 0. , 0.75 , 0.9375, 0.9375, 0.75 , 0. , 0. ,\n 0. , 0.25 , 1. , 0.375 , 0.25 , 1. , 0.375 , 0. ,\n 0. , 0.5 , 1. , 0.625 , 0.5 , 1. , 0.5 , 0. ,\n 0. , 0.0625, 0.5 , 0.75 , 0.875 , 0.75 , 0.0625, 0. 
};\n // the output vector for the model predictions\n float y_pred[10] = {0};\n // the actual class of the sample\n int y_test = 8;\n\n // let's see how long it takes to classify the sample\n uint32_t start = micros();\n\n ml.predict(x_test, y_pred);\n\n uint32_t timeit = micros() - start;\n\n Serial.print("It took ");\n Serial.print(timeit);\n Serial.println(" micros to run inference");\n\n // let's print the raw predictions for all the classes\n // these values are not directly interpretable as probabilities!\n Serial.print("Test output is: ");\n Serial.println(y_test);\n Serial.print("Predicted proba are: ");\n\n for (int i = 0; i < 10; i++) {\n Serial.print(y_pred[i]);\n Serial.print(i == 9 ? '\\n' : ',');\n }\n\n // let's print the "most probable" class\n // you can either use probaToClass() if you also want to use all the probabilities\n Serial.print("Predicted class is: ");\n Serial.println(ml.probaToClass(y_pred));\n // or you can skip the predict() method and call directly predictClass()\n Serial.print("Sanity check: ");\n Serial.println(ml.predictClass(x_test));\n\n delay(1000);\n}\nThat's it: if everything went fine, you should see that the predicted class is 8.\nCNN on Arduino and STM32 figures\nI'll report the figures I get for compiling and running this project on the two boards I used.\n\n\n\nBoard\nFlash\nRAM\nInference time\n\n\n\n\nNucleus L432KC\n154560\nnot available*\n7187\n\n\nArduino Nano 33 BLE Sense\n197656\n56160\n9400\n\n\n\nI used the Grumpyoldpizza compiler for the Nucleus, which doesn't report back the RAM usage\nAnd you?\nWere you able to deploy a CNN to your microcontroller thanks to this tutorial? Or are you having troubles?\nLet me know in the comment and I will help you or share your experience with us.\n\nYou can find the whole code on Github.\nL'articolo TinyML on Arduino and STM32: CNN (Convolutional Neural Network) example proviene da Eloquent Arduino Blog.", "date_published": "2020-11-10T17:37:13+01:00", "date_modified": "2020-11-10T19:10:06+01:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "Senza categoria" ] }, { "id": "https://eloquentarduino.github.io/?p=1264", "url": "https://eloquentarduino.github.io/2020/10/decision-tree-random-forest-and-xgboost-on-arduino/", "title": "Decision Tree, Random Forest and XGBoost on Arduino", "content_html": "You will be surprised by how much accuracy you can achieve in just a few kylobytes of resources: Decision Tree, Random Forest and XGBoost (Extreme Gradient Boosting) are now available on your microcontrollers: highly RAM-optmized implementations for super-fast classification on embedded devices.
\n\n\n
Decision Tree is without doubt one of the most well-known classification algorithms out there. It is so simple to understand that it was probably the first classifier you encountered in any Machine Learning course.
\nI won't go into the details of how a Decision Tree classifier trains and selects the splits for the input features: here I will explain how a RAM-efficient porting of such a classifier is implemented.
\nTo an introduction visit Wikipedia; for a more in-depth guide visit KDNuggets.
\nSince we're willing to sacrifice program space (a.k.a flash) in favor of memory (a.k.a RAM), because RAM is the most scarce resource in the vast majority of microcontrollers, the smart way to port a Decision Tree classifier from Python to C is "hard-coding" the splits in code, without keeping any reference to them into variables.
\nHere's what it looks like for a Decision tree that classifies the Iris dataset.
\nAs you can see, we're using 0 bytes of RAM to get the classification result, since no variable is being allocated. On the other side, the program space will grow almost linearly with the number of splits.
\nSince program space is often much greater than RAM on microcontrollers, this implementation exploits its abundance to be able to deploy larger models. How much large? It will depend on the flash size available: many new generations board (Arduino Nano 33 BLE Sense, ESP32, ST Nucleus...) have 1 Mb of flash, which will hold tens of thousands of splits.
\nRandom Forest is just many Decision Trees joined together in a voting scheme. The core idea is that of "the wisdom of the corwd", such that if many trees vote for a given class (having being trained on different subsets of the training set), that class is probably the true class.
\nTowards Data Science has a more detailed guide on Random Forest and how it balances the trees with thebagging tecnique.
\nAs easy as Decision Trees, Random Forest gets the exact same implementation with 0 bytes of RAM required (it actually needs as many bytes as the number of classes to store the votes, but that's really negligible): it just hard-codes all its composing trees.
\nExtreme Gradient Boosting is "Gradient Boosting on steroids" and has gained much attention from the Machine learning community due to its top results in many data competitions.
\nYou can read the original paper about XGBoost here. For a discursive description head to KDNuggets, if you want some more math refer to this blog post on Medium.
\nIf you followed my earlier posts on Gaussian Naive Bayes, SEFR, Relevant Vector Machine and Support Vector Machines, you already know how to port these new classifiers.
\nIf you're new, you will need a couple things:
\npip install micromlgen
\npip install xgboost
\nmicromlgen.port
function to generate your plain C codefrom micromlgen import port\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.datasets import load_iris\n\nclf = DecisionTreeClassifier()\nX, y = load_iris(return_X_y=True)\nclf.fit(X, y)\nprint(port(clf))
\nYou can then copy-past the C code and import it in your sketch.
\nOnce you have the classifier code, create a new project named TreeClassifierExample
and copy the classifier code into a file named DecisionTree.h
(or RandomForest.h
or XGBoost.h
depending on the model you chose).
The copy the following to the main ino file.
\n#include "DecisionTree.h"\n\nEloquent::ML::Port::DecisionTree clf;\n\nvoid setup() {\n Serial.begin(115200);\n Serial.println("Begin");\n}\n\nvoid loop() {\n float irisSample[4] = {6.2, 2.8, 4.8, 1.8};\n\n Serial.print("Predicted label (you should see '2': ");\n Serial.println(clf.predict(irisSample));\n delay(1000);\n}
\nHow do the 3 classifiers compare against each other?
\nWe will evaluate a few keypoints:
\nfor each classifier on a variety of datasets. I will report the results for RAM and Flash on the Arduino Nano old generation, so you should consider more the relative figures than the absolute ones.
\nDataset | \nClassifier | \nTraining time (s) | \nAccuracy | \nRAM (bytes) | \nFlash (bytes) | \n
---|---|---|---|---|---|
Gas Sensor Array Drift Dataset | \nDecision Tree | \n1,6 | \n0.781 \u00b1 0.12 | \n290 | \n5722 | \n
13910 samples x 128 features | \nRandom Forest | \n3 | \n0.865 \u00b1 0.083 | \n290 | \n6438 | \n
6 classes | \nXGBoost | \n18,8 | \n0.878 \u00b1 0.074 | \n290 | \n6506 | \n
Gesture Phase Segmentation Dataset | \nDecision Tree | \n0,1 | \n0.943 \u00b1 0.005 | \n290 | \n5638 | \n
10000 samples x 19 features | \nRandom Forest | \n0,7 | \n0.970 \u00b1 0.004 | \n306 | \n6466 | \n
5 classes | \nXGBoost | \n18,9 | \n0.969 \u00b1 0.003 | \n306 | \n6536 | \n
Drive Diagnosis Dataset | \nDecision Tree | \n0,6 | \n0.946 \u00b1 0.005 | \n306 | \n5850 | \n
10000 samples x 48 features | \nRandom Forest | \n2,6 | \n0.983 \u00b1 0.003 | \n306 | \n6526 | \n
11 classes | \nXGBoost | \n68,9 | \n0.977 \u00b1 0.005 | \n306 | \n6698 | \n
* all datasets are taken from the UCI Machine Learning datasets archive
\nI'm collecting more data for a complete benchmark, but in the meantime you can see that both Random Forest and XGBoost are on par: if not that XGBoost takes 5 to 25 times longer to train.
\nI've never used XGBoost, so I may be missing some tuning parameters, but for now Random Forest remains my favourite classifier.
\n// example IRIS dataset classification with Decision Tree\nint predict(float *x) {\n if (x[3] <= 0.800000011920929) {\n return 0;\n }\n else {\n if (x[3] <= 1.75) {\n if (x[2] <= 4.950000047683716) {\n if (x[0] <= 5.049999952316284) {\n return 1;\n }\n else {\n return 1;\n }\n }\n else {\n return 2;\n }\n }\n else {\n if (x[2] <= 4.950000047683716) {\n return 2;\n }\n else {\n return 2;\n }\n }\n }\n}
\n// example IRIS dataset classification with Random Forest of 3 trees\n\nint predict(float *x) {\n uint16_t votes[3] = { 0 };\n\n // tree #1\n if (x[0] <= 5.450000047683716) {\n if (x[1] <= 2.950000047683716) {\n votes[1] += 1;\n }\n else {\n votes[0] += 1;\n }\n }\n else {\n if (x[0] <= 6.049999952316284) {\n if (x[3] <= 1.699999988079071) {\n if (x[2] <= 3.549999952316284) {\n votes[0] += 1;\n }\n else {\n votes[1] += 1;\n }\n }\n else {\n votes[2] += 1;\n }\n }\n else {\n if (x[3] <= 1.699999988079071) {\n if (x[3] <= 1.449999988079071) {\n if (x[0] <= 6.1499998569488525) {\n votes[1] += 1;\n }\n else {\n votes[1] += 1;\n }\n }\n else {\n votes[1] += 1;\n }\n }\n else {\n votes[2] += 1;\n }\n }\n }\n\n // tree #2\n if (x[0] <= 5.549999952316284) {\n if (x[2] <= 2.449999988079071) {\n votes[0] += 1;\n }\n else {\n if (x[2] <= 3.950000047683716) {\n votes[1] += 1;\n }\n else {\n votes[1] += 1;\n }\n }\n }\n else {\n if (x[3] <= 1.699999988079071) {\n if (x[1] <= 2.649999976158142) {\n if (x[3] <= 1.25) {\n votes[1] += 1;\n }\n else {\n votes[1] += 1;\n }\n }\n else {\n if (x[2] <= 4.1499998569488525) {\n votes[1] += 1;\n }\n else {\n if (x[0] <= 6.75) {\n votes[1] += 1;\n }\n else {\n votes[1] += 1;\n }\n }\n }\n }\n else {\n if (x[0] <= 6.0) {\n votes[2] += 1;\n }\n else {\n votes[2] += 1;\n }\n }\n }\n\n // tree #3\n if (x[3] <= 1.75) {\n if (x[2] <= 2.449999988079071) {\n votes[0] += 1;\n }\n else {\n if (x[2] <= 4.8500001430511475) {\n if (x[0] <= 5.299999952316284) {\n votes[1] += 1;\n }\n else {\n votes[1] += 1;\n }\n }\n else {\n votes[1] += 1;\n }\n }\n }\n else {\n if (x[0] <= 5.950000047683716) {\n votes[2] += 1;\n }\n else {\n votes[2] += 1;\n }\n }\n\n // return argmax of votes\n uint8_t classIdx = 0;\n float maxVotes = votes[0];\n\n for (uint8_t i = 1; i < 3; i++) {\n if (votes[i] > maxVotes) {\n classIdx = i;\n maxVotes = votes[i];\n }\n }\n\n return classIdx;\n}
\nL'articolo Decision Tree, Random Forest and XGBoost on Arduino proviene da Eloquent Arduino Blog.
\n", "content_text": "You will be surprised by how much accuracy you can achieve in just a few kylobytes of resources: Decision Tree, Random Forest and XGBoost (Extreme Gradient Boosting) are now available on your microcontrollers: highly RAM-optmized implementations for super-fast classification on embedded devices.\n\n\nDecision Tree\nDecision Tree is without doubt one of the most well-known classification algorithms out there. It is so simple to understand that it was probably the first classifier you encountered in any Machine Learning course.\nI won't go into the details of how a Decision Tree classifier trains and selects the splits for the input features: here I will explain how a RAM-efficient porting of such a classifier is implemented.\nTo an introduction visit Wikipedia; for a more in-depth guide visit KDNuggets.\nSince we're willing to sacrifice program space (a.k.a flash) in favor of memory (a.k.a RAM), because RAM is the most scarce resource in the vast majority of microcontrollers, the smart way to port a Decision Tree classifier from Python to C is "hard-coding" the splits in code, without keeping any reference to them into variables.\nHere's what it looks like for a Decision tree that classifies the Iris dataset.\nAs you can see, we're using 0 bytes of RAM to get the classification result, since no variable is being allocated. On the other side, the program space will grow almost linearly with the number of splits.\nSince program space is often much greater than RAM on microcontrollers, this implementation exploits its abundance to be able to deploy larger models. How much large? It will depend on the flash size available: many new generations board (Arduino Nano 33 BLE Sense, ESP32, ST Nucleus...) have 1 Mb of flash, which will hold tens of thousands of splits. \nRandom Forest\nRandom Forest is just many Decision Trees joined together in a voting scheme. The core idea is that of "the wisdom of the corwd", such that if many trees vote for a given class (having being trained on different subsets of the training set), that class is probably the true class.\nTowards Data Science has a more detailed guide on Random Forest and how it balances the trees with thebagging tecnique.\nAs easy as Decision Trees, Random Forest gets the exact same implementation with 0 bytes of RAM required (it actually needs as many bytes as the number of classes to store the votes, but that's really negligible): it just hard-codes all its composing trees.\nXGBoost (Extreme Gradient Boosting)\nExtreme Gradient Boosting is "Gradient Boosting on steroids" and has gained much attention from the Machine learning community due to its top results in many data competitions.\n\n"gradient boosting" refers to the process of chaining a number of trees so that each tree tries to learn from the errors of the previous\n"extreme" refers to many software and hardware optimizations that greatly reduce the time it takes to train the model\n\nYou can read the original paper about XGBoost here. 
For a discursive description head to KDNuggets, if you want some more math refer to this blog post on Medium.\nPorting to plain C\nIf you followed my earlier posts on Gaussian Naive Bayes, SEFR, Relevant Vector Machine and Support Vector Machines, you already know how to port these new classifiers.\nIf you're new, you will need a couple things:\n\ninstall the micromlgen package with \n\npip install micromlgen\n\n(optionally, if you want to use Extreme Gradient Boosting) install the xgboost package with \n\npip install xgboost\n\nuse the micromlgen.port function to generate your plain C code\n\nfrom micromlgen import port\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.datasets import load_iris\n\nclf = DecisionTreeClassifier()\nX, y = load_iris(return_X_y=True)\nclf.fit(X, y)\nprint(port(clf))\nYou can then copy-past the C code and import it in your sketch.\nUsing in the Arduino sketch\nOnce you have the classifier code, create a new project named TreeClassifierExample and copy the classifier code into a file named DecisionTree.h (or RandomForest.h or XGBoost.h depending on the model you chose).\nThe copy the following to the main ino file.\n#include "DecisionTree.h"\n\nEloquent::ML::Port::DecisionTree clf;\n\nvoid setup() {\n Serial.begin(115200);\n Serial.println("Begin");\n}\n\nvoid loop() {\n float irisSample[4] = {6.2, 2.8, 4.8, 1.8};\n\n Serial.print("Predicted label (you should see '2': ");\n Serial.println(clf.predict(irisSample));\n delay(1000);\n}\nBechmarks\nHow do the 3 classifiers compare against each other?\nWe will evaluate a few keypoints:\n\ntraining time\naccuracy\nneeded RAM\nneeded Flash\n\nfor each classifier on a variety of datasets. I will report the results for RAM and Flash on the Arduino Nano old generation, so you should consider more the relative figures than the absolute ones.\n\n\n\nDataset\nClassifier\nTraining time (s)\nAccuracy\nRAM (bytes)\nFlash (bytes)\n\n\n\n\nGas Sensor Array Drift Dataset \nDecision Tree\n1,6\n0.781 \u00b1 0.12\n290\n5722\n\n\n13910 samples x 128 features\nRandom Forest\n3\n0.865 \u00b1 0.083\n290\n6438\n\n\n6 classes\nXGBoost\n18,8\n0.878 \u00b1 0.074\n290\n6506\n\n\nGesture Phase Segmentation Dataset\nDecision Tree\n0,1\n0.943 \u00b1 0.005\n290\n5638\n\n\n10000 samples x 19 features\nRandom Forest\n0,7\n0.970 \u00b1 0.004\n306\n6466\n\n\n5 classes\nXGBoost\n18,9\n0.969 \u00b1 0.003\n306\n6536\n\n\nDrive Diagnosis Dataset\nDecision Tree\n0,6\n0.946 \u00b1 0.005\n306\n5850\n\n\n10000 samples x 48 features\nRandom Forest\n2,6\n0.983 \u00b1 0.003\n306\n6526\n\n\n11 classes\nXGBoost\n68,9\n0.977 \u00b1 0.005\n306\n6698\n\n\n\n* all datasets are taken from the UCI Machine Learning datasets archive\nI'm collecting more data for a complete benchmark, but in the meantime you can see that both Random Forest and XGBoost are on par: if not that XGBoost takes 5 to 25 times longer to train.\nI've never used XGBoost, so I may be missing some tuning parameters, but for now Random Forest remains my favourite classifier.\nCode listings\n// example IRIS dataset classification with Decision Tree\nint predict(float *x) {\n if (x[3] <= 0.800000011920929) {\n return 0;\n }\n else {\n if (x[3] <= 1.75) {\n if (x[2] <= 4.950000047683716) {\n if (x[0] <= 5.049999952316284) {\n return 1;\n }\n else {\n return 1;\n }\n }\n else {\n return 2;\n }\n }\n else {\n if (x[2] <= 4.950000047683716) {\n return 2;\n }\n else {\n return 2;\n }\n }\n }\n}\n// example IRIS dataset classification with Random Forest of 3 trees\n\nint predict(float *x) {\n 
uint16_t votes[3] = { 0 };\n\n // tree #1\n if (x[0] <= 5.450000047683716) {\n if (x[1] <= 2.950000047683716) {\n votes[1] += 1;\n }\n else {\n votes[0] += 1;\n }\n }\n else {\n if (x[0] <= 6.049999952316284) {\n if (x[3] <= 1.699999988079071) {\n if (x[2] <= 3.549999952316284) {\n votes[0] += 1;\n }\n else {\n votes[1] += 1;\n }\n }\n else {\n votes[2] += 1;\n }\n }\n else {\n if (x[3] <= 1.699999988079071) {\n if (x[3] <= 1.449999988079071) {\n if (x[0] <= 6.1499998569488525) {\n votes[1] += 1;\n }\n else {\n votes[1] += 1;\n }\n }\n else {\n votes[1] += 1;\n }\n }\n else {\n votes[2] += 1;\n }\n }\n }\n\n // tree #2\n if (x[0] <= 5.549999952316284) {\n if (x[2] <= 2.449999988079071) {\n votes[0] += 1;\n }\n else {\n if (x[2] <= 3.950000047683716) {\n votes[1] += 1;\n }\n else {\n votes[1] += 1;\n }\n }\n }\n else {\n if (x[3] <= 1.699999988079071) {\n if (x[1] <= 2.649999976158142) {\n if (x[3] <= 1.25) {\n votes[1] += 1;\n }\n else {\n votes[1] += 1;\n }\n }\n else {\n if (x[2] <= 4.1499998569488525) {\n votes[1] += 1;\n }\n else {\n if (x[0] <= 6.75) {\n votes[1] += 1;\n }\n else {\n votes[1] += 1;\n }\n }\n }\n }\n else {\n if (x[0] <= 6.0) {\n votes[2] += 1;\n }\n else {\n votes[2] += 1;\n }\n }\n }\n\n // tree #3\n if (x[3] <= 1.75) {\n if (x[2] <= 2.449999988079071) {\n votes[0] += 1;\n }\n else {\n if (x[2] <= 4.8500001430511475) {\n if (x[0] <= 5.299999952316284) {\n votes[1] += 1;\n }\n else {\n votes[1] += 1;\n }\n }\n else {\n votes[1] += 1;\n }\n }\n }\n else {\n if (x[0] <= 5.950000047683716) {\n votes[2] += 1;\n }\n else {\n votes[2] += 1;\n }\n }\n\n // return argmax of votes\n uint8_t classIdx = 0;\n float maxVotes = votes[0];\n\n for (uint8_t i = 1; i < 3; i++) {\n if (votes[i] > maxVotes) {\n classIdx = i;\n maxVotes = votes[i];\n }\n }\n\n return classIdx;\n}\nL'articolo Decision Tree, Random Forest and XGBoost on Arduino proviene da Eloquent Arduino Blog.", "date_published": "2020-10-19T19:31:02+02:00", "date_modified": "2020-12-10T12:26:23+01:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "microml", "ml", "Arduino Machine learning", "Arduino Machine Learning tutorial" ] }, { "id": "https://eloquentarduino.github.io/?p=1297", "url": "https://eloquentarduino.github.io/2020/09/principal-fft-components-as-efficient-features-extrator/", "title": "\u201cPrincipal\u201d FFT components as efficient features extrator", "content_html": "Fourier Transform is probably the most well known algorithm for feature extraction from time-dependent data (in particular speech data), where frequency holds a great deal of information. Sadly, computing the transform over the whole spectrum of the signal still requires O(NlogN) with the best implementation (FFT - Fast Fourier Transform); we would like to achieve faster computation on our microcontrollers.
\nIn this post I propose a partial, naive linear-time implementation of the Fourier Transform you can use to extract features from your data for Machine Learning models.
DISCLAIMER
\nThe contents of this post represent my own knowledge and are not supported by any academic work (as far as I know). It may well be that these findings don't apply to your own projects; still, I think the idea can prove useful for solving certain kinds of problems.
\nFourier transform is used to describe a signal over its entire frequency range. This is useful in a number of applications, but here we're focused on the FT for the sole purpose of extracting features to be used with Machine learning models.
\nFor this reason, we don't actually need a full description of the input signal: we're only interested in extracting some kind of signature that a ML model can use to distinguish among the different classes. Noticing that in a signal spectrum most frequencies have a low magnitude (as you can see in the picture above), the idea of keeping only the most important frequencies came to my mind as a means to speed up the computation on resource-constrained microcontrollers.
\nI was thinking of a kind of PCA (Principal Component Analysis), but using the FFT spectrum as features.
\nSince we will have a training set with the raw signals, we can select the most prominent frequencies across all the samples and apply the computation only to those: even with a naive implementation of the transform, this yields linear-time feature extraction.
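\nTo make the idea concrete, here is a minimal Python sketch of this training-aware selection (an illustration of the concept only, not the actual PrincipalFFT implementation; the function names are mine): pick the k bins with the highest average magnitude over the training set, then compute only those bins for each new sample.
# illustration: pick the "principal" frequency bins from the training set,
# then compute only those bins for new samples
import numpy as np

def top_frequencies(X_train, k=10):
    # average magnitude spectrum over all training samples
    mean_spectrum = np.abs(np.fft.rfft(X_train, axis=1)).mean(axis=0)
    # indices of the k most prominent bins
    return np.argsort(mean_spectrum)[::-1][:k]

def principal_fft(x, bins):
    # naive DFT restricted to the selected bins: O(len(x) * len(bins))
    n = np.arange(len(x))
    return np.array([np.abs((x * np.exp(-2j * np.pi * k * n / len(x))).sum()) for k in bins])

# usage sketch (X_train is a 2D array of raw signals, one row per sample):
# bins = top_frequencies(X_train, k=10)
# features = np.array([principal_fft(x, bins) for x in X_train])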
\nHow does this Principal FFT compare to, let's say, PCA as a dimensionality reduction algorithm w.r.t model accuracy? Let's see the numbers!
\n\nDownload the Principal FFT benchmark spreadsheet
\nI couldn't find many examples of the kinds of datasets I wished to test, but in the image you can see different types of data:
human activity classification from smartphone data
gesture classification by IMU data
MNIST handwritten digits image data
free speech audio data
\nWe can note a couple of findings:
Principal FFT is almost on par with PCA after a certain number of components
PrincipalFFT definitely leaves PCA behind on audio data
\nEven from this simple analysis, you should be convinced that Principal FFT can be (in certain cases) a fast, performant feature extractor for projects that involve time-dependent data.
\nI created a Python package to use Principal FFT, called principal-fft
.
pip install principal-fft
\nThe class follows the Transformer
API from scikit-learn
, so it has fit
and transform
methods.
from principalfft import PrincipalFFT\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.datasets import load_digits\nfrom sklearn.ensemble import RandomForestClassifier\n\nmnist = load_digits()\nX, y = mnist.data, mnist.target\nXfft = PrincipalFFT(n_components=10).fit_transform(X)\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)\nXfft_train, Xfft_test, y_train, y_test = train_test_split(Xfft, y, test_size=0.3)\n\nclf = RandomForestClassifier(50, min_samples_leaf=5).fit(X_train, y_train)\nprint("Raw score", clf.score(X_test, y_test))\n\nclf = RandomForestClassifier(50, min_samples_leaf=5).fit(Xfft_train, y_train)\nprint("FFT score", clf.score(Xfft_test, y_test))
\nMy results are 0.09
for raw data and 0.78
for FFT transformed: quite a big difference!
As with any dimensionality reduction, n_components
 is a hyperparameter you have to tune for your specific project: from my experiments, you shouldn't go lower than 8
to achieve a reasonable accuracy.
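\nA quick way to pick it is a small sweep over candidate values; a sketch along these lines (using the same digits dataset and classifier as above) is enough:
# sweep n_components to find the smallest value that still gives acceptable accuracy
from principalfft import PrincipalFFT
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

for n in [4, 8, 12, 16, 24, 32]:
    Xfft = PrincipalFFT(n_components=n).fit_transform(X)
    X_train, X_test, y_train, y_test = train_test_split(Xfft, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(50, min_samples_leaf=5).fit(X_train, y_train)
    print(n, clf.score(X_test, y_test))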
So, now that we tested our Principal FFT transformer in Python and achieved good results, how do we use it on our microcontroller? Of course with the micromlgen
porter: it is now (version 1.1.9
) able to port PrincipalFFT objects to plain C.
pip install micromlgen==1.1.9
\nWhat does the C code look like?
\nvoid principalFFT(float *features, float *fft) {\n // apply principal FFT (naive implementation for the top 10 frequencies only)\n const int topFrequencies[] = { 0, 8, 17, 16, 1, 9, 2, 7, 15, 6 };\n\n for (int i = 0; i < 10; i++) {\n const int k = topFrequencies[i];\n const float harmonic = 0.09817477042468103 * k;\n float re = 0;\n float im = 0;\n\n // optimized case\n if (k == 0) {\n for (int n = 0; n < 64; n++) {\n re += features[n];\n }\n }\n\n else {\n for (int n = 0; n < 64; n++) {\n const float harmonic_n = harmonic * n;\n const float cos_n = cos(harmonic_n);\n const float sin_n = sin(harmonic_n);\n re += features[n] * cos_n;\n im -= features[n] * sin_n;\n }\n }\n\n fft[i] = sqrt(re * re + im * im);\n }\n}
\nThis is the most direct porting available.
\nIn the Benchmarks section, we'll see how this implementation can be sped up with alternative implementations.
\nThe following table reports the benchmark on the MNIST dataset (64 features) with 10 principal FFT components vs various techniques to decrease the computation time at the expense of memory usage.
Algorithm | Flash (bytes) | Execution time (micros)
---|---|---
None | 137420 | -
arduinoFFT library | 147812 | 3200
principalFFT | 151404 | 4400
principalFFT w/ cos+sin LUT | 152124 | 900
principalFFT w/ cos LUT + sin sign LUT | 150220 | 1250
*all the benchmarks were run on the Arduino Nano 33 BLE Sense
\nSome thoughts:
\nprincipalFFT w/ cos+sin LUT means I pre-compute the values of sin and cos at compile time, so there's no computation on the board; of course these lookup tables will eat some memory
principalFFT w/ cos LUT + sin sign LUT means I pre-compute the cos values only and compute sin using sqrt(1 - cos(x)^2); it adds some microseconds to the computation, but requires less memory
arduinoFFT library is faster than principalFFT in execution time and requires less memory, even though principalFFT is only computing 10 frequencies: I need to investigate how it achieves such performance
You can activate the LUT functionality with:
\nfrom micromlgen import port\nfrom principalfft import PrincipalFFT\n\nfft = PrincipalFFT(n_components=10).fit(X)\n\n# cos lookup, sin computed\nport(fft, lookup_cos=True)\n\n# cos + sin lookup\nport(fft, lookup_cos=True, lookup_sin=True)
\nHere's what the C code looks like with the LUT.
\nvoid principalFFT(float *features, float *fft) {\n // apply principal FFT (naive implementation for the top N frequencies only)\n const int topFrequencies[] = { 0, 8, 17, 16, 1, 9, 2, 7, 15, 6 };\n const float cosLUT[10][64] = {\n { 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0},\n { 1.0, 0.7071, 6.1232e-17, -0.7071, -1.0, -0.7071, -1.8369e-16, 0.7071, 1.0, 0.7071, 3.0616e-16, -0.7071, -1.0, -0.7071, -4.2862e-16, 0.7071, 1.0, 0.7071, 5.5109e-16, -0.7071, -1.0, -0.7071, -2.4499e-15, 0.7071, 1.0, 0.7071, -9.8033e-16, -0.7071, -1.0, -0.7071, -2.6948e-15, 0.7071, 1.0, 0.7071, -7.3540e-16, -0.7071, -1.0, -0.7071, -2.9397e-15, 0.7071, 1.0, 0.7071, -4.9047e-16, -0.7071, -1.0, -0.7071, -3.1847e-15, 0.7071, 1.0, 0.7071, -2.4554e-16, -0.7071, -1.0, -0.7071, -3.4296e-15, 0.7071, 1.0, 0.7071, -6.1898e-19, -0.7071, -1.0, -0.7071, -3.6745e-15, 0.7071}, ... };\n const bool sinLUT[10][64] = {\n { false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false},\n { false, true, true, true, true, false, false, false, false, true, true, true, true, false, false, false, false, true, true, true, true, false, false, false, false, true, true, true, true, false, false, false, false, true, true, true, true, false, false, false, false, true, true, true, true, false, false, false, false, true, true, true, false, false, false, false, false, true, true, true, true, false, false, false}, ...};\n\n for (int i = 0; i < 10; i++) {\n const int k = topFrequencies[i];\n const float harmonic = 0.09817477042468103 * k;\n float re = 0;\n float im = 0;\n // optimized case\n if (k == 0) {\n for (int n = 0; n < 64; n++) {\n re += features[n];\n }\n }\n\n else {\n for (int n = 0; n < 64; n++) {\n const float cos_n = cosLUT[i][n];\n const float sin_n = sinLUT[i][n] ? sqrt(1 - cos_n * cos_n) : -sqrt(1 - cos_n * cos_n);\n re += features[n] * cos_n;\n im -= features[n] * sin_n;\n }\n }\n\n fft[i] = sqrt(re * re + im * im);\n }\n}
\nThis post required much work to produce, so I hope I didn't forget anything and that you found this information useful.
\nAs always, there's a Github repo with all the code of this post.
The article “Principal” FFT components as efficient features extractor comes from Eloquent Arduino Blog.
\n", "content_text": "Fourier Transform is probably the most well known algorithm for feature extraction from time-dependent data (in particular speech data), where frequency holds a great deal of information. Sadly, computing the transform over the whole spectrum of the signal still requires O(NlogN) with the best implementation (FFT - Fast Fourier Transform); we would like to achieve faster computation on our microcontrollers.\nIn this post I propose a partial, naive linear-time implementation of the Fourier Transform you can use to extract features from your data for Machine Learning models.\n\n\nTable of contentsTraining-aware FFTAccuracy comparisonHow to use Principal FFT in PythonHow to use Principal FFT in CBenchmarking\nDISCLAIMER\nThe contents of this post represent my own knowledge and are not supported by any academic work (as far as I know). It may really be the case that the findings of my work don't apply to your own projects; yet, I think this idea can turn useful in solving certain kind of problems.\nTraining-aware FFT\nFourier transform is used to describe a signal over its entire frequency range. This is useful in a number of applications, but here we're focused on the FT for the sole purpose of extracting features to be used with Machine learning models.\nFor this reason, we don't actually need a full description of the input signal: we're only interested in extracting some kind of signature that a ML model can use to distinguish among the different classes. Noticing that in a signal spectrum most frequencies have a low magnitude (as you can see in the picture above), the idea to only keep the most important frequencies came to my mind as a mean to speed up the computation on resource constrained microcontrollers.\nI was thinking to a kind of PCA (Principal Component Analysis), but using FFT spectrum as features.\nSince we will have a training set with the raw signals, we would like to select the most prominent frequencies among all the samples and apply the computation only on those: even using the naive implementation of FFT, this will yield a linear-time implementation.\nAccuracy comparison\nHow does this Principal FFT compare to, let's say, PCA as a dimensionality reduction algorithm w.r.t model accuracy? 
Let's see the numbers!\n\nDownload the Principal FFT benchmark spreadsheet\nI couldn't find many examples of the kind of datasets I wished to test, but in the image you can see different types of data:\n\nhuman activity classification from smartphone data\ngesture classification by IMU data\nMNIST handwritten digits image data\nfree speech audio data\n\nWe can note a couple findings:\n\nPrincipal FFT is almost on par with PCA after a certain number of components\nPrincipalFFT definitely leaves PCA behind on audio data\n\nFrom even this simple analysis you should be convinced that Principal FFT can be (under certain cases) a fast, performant features extractor for your projects that involve time-dependant data.\nHow to use Principal FFT in Python\nI created a Python package to use Principal FFT, called principal-fft.\npip install principal-fft\nThe class follows the Transformer API from scikit-learn, so it has fit and transform methods.\nfrom principalfft import PrincipalFFT\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.datasets import load_digits\nfrom sklearn.ensemble import RandomForestClassifier\n\nmnist = load_digits()\nX, y = mnist.data, mnist.target\nXfft = PrincipalFFT(n_components=10).fit_transform(X)\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)\nXfft_train, Xfft_test, y_train, y_test = train_test_split(Xfft, y, test_size=0.3)\n\nclf = RandomForestClassifier(50, min_samples_leaf=5).fit(X_train, y_train)\nprint("Raw score", clf.score(X_test, y_test))\n\nclf = RandomForestClassifier(50, min_samples_leaf=5).fit(Xfft_train, y_train)\nprint("FFT score", clf.score(Xfft_test, y_test))\nMy results are 0.09 for raw data and 0.78 for FFT transformed: quite a big difference!\nAs with any dimensionality reduction, n_components is an hyperparameter you have to tune for your specific project: from my experiments, you shouldn't go lower than 8 to achieve a reasonable accuracy.\nHow to use Principal FFT in C\nSo, now that we tested our Principal FFT transformer in Python and achieved good results, how do we use it on our microcontroller? 
Of course with the micromlgen porter: it is now (version 1.1.9) able to port PrincipalFFT objects to plain C.\npip install micromlgen==1.1.9\nWhat does the C code look like?\nvoid principalFFT(float *features, float *fft) {\n // apply principal FFT (naive implementation for the top 10 frequencies only)\n const int topFrequencies[] = { 0, 8, 17, 16, 1, 9, 2, 7, 15, 6 };\n\n for (int i = 0; i < 10; i++) {\n const int k = topFrequencies[i];\n const float harmonic = 0.09817477042468103 * k;\n float re = 0;\n float im = 0;\n\n // optimized case\n if (k == 0) {\n for (int n = 0; n < 64; n++) {\n re += features[n];\n }\n }\n\n else {\n for (int n = 0; n < 64; n++) {\n const float harmonic_n = harmonic * n;\n const float cos_n = cos(harmonic_n);\n const float sin_n = sin(harmonic_n);\n re += features[n] * cos_n;\n im -= features[n] * sin_n;\n }\n }\n\n fft[i] = sqrt(re * re + im * im);\n }\n}\nThis is the most direct porting available.\nIn the Benchmarks section, we'll see how this implementation can be speed-up with alternative implementations.\nBenchmarking\nThe following table reports the benchmark on the MNIST dataset (64 features) with 10 principal FFT components vs various tecniques to decrease the computation time at the expense of memory usage.\n\n\n\nAlgorithm\nFlash (Kb)\nExecution time (micros)\n\n\n\n\nNone\n137420\n-\n\n\narduinoFFT library\n147812\n3200\n\n\nprincipalFFT\n151404\n4400\n\n\nprincipalFFT w/ cos+sin LUT\n152124\n900\n\n\nprincipalFFT w/ cos LUT + sin sign LUT\n150220\n1250\n\n\n\n*all the benchmarks were run on the Arduino 33 Nano BLE Sense\nSome thoughts:\n\nprincipalFFT w/ cos+sin LUT means I pre-compute the values of sin and cos at compile time, so there's no computation on the board; of course these lookup tables will eat some memory\nprincipalFFT w/ cos LUT + sin sign LUT means I pre-compute the cos values only and compute sin using sqrt(1 - cos(x)^2); it adds some microseconds to the computation, but requires less memory\narduinoFFT library is faster than principalFFT in the execution time and requires less memory, even if principalFFT is only computing 10 frequencies: I need to investigate how it can achieve such performances\n\nYou can activate the LUT functionality with:\nfrom micromlgen import port\nfrom principalfft import PrincipalFFT\n\nfft = PrincipalFFT(n_components=10).fit(X)\n\n# cos lookup, sin computed\nport(fft, lookup_cos=True)\n\n# cos + sin lookup\nport(fft, lookup_cos=True, lookup_sin=True)\nHere's how the C code looks like with LUT.\nvoid principalFFT(float *features, float *fft) {\n // apply principal FFT (naive implementation for the top N frequencies only)\n const int topFrequencies[] = { 0, 8, 17, 16, 1, 9, 2, 7, 15, 6 };\n const float cosLUT[10][64] = {\n { 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0},\n { 1.0, 0.7071, 6.1232e-17, -0.7071, -1.0, -0.7071, -1.8369e-16, 0.7071, 1.0, 0.7071, 3.0616e-16, -0.7071, -1.0, -0.7071, -4.2862e-16, 0.7071, 1.0, 0.7071, 5.5109e-16, -0.7071, -1.0, -0.7071, -2.4499e-15, 0.7071, 1.0, 0.7071, -9.8033e-16, -0.7071, -1.0, -0.7071, -2.6948e-15, 0.7071, 1.0, 0.7071, -7.3540e-16, -0.7071, -1.0, -0.7071, -2.9397e-15, 0.7071, 1.0, 0.7071, -4.9047e-16, -0.7071, -1.0, -0.7071, -3.1847e-15, 0.7071, 1.0, 0.7071, -2.4554e-16, -0.7071, -1.0, -0.7071, 
-3.4296e-15, 0.7071, 1.0, 0.7071, -6.1898e-19, -0.7071, -1.0, -0.7071, -3.6745e-15, 0.7071}, ... };\n const bool sinLUT[10][64] = {\n { false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false},\n { false, true, true, true, true, false, false, false, false, true, true, true, true, false, false, false, false, true, true, true, true, false, false, false, false, true, true, true, true, false, false, false, false, true, true, true, true, false, false, false, false, true, true, true, true, false, false, false, false, true, true, true, false, false, false, false, false, true, true, true, true, false, false, false}, ...};\n\n for (int i = 0; i < 10; i++) {\n const int k = topFrequencies[i];\n const float harmonic = 0.09817477042468103 * k;\n float re = 0;\n float im = 0;\n // optimized case\n if (k == 0) {\n for (int n = 0; n < 64; n++) {\n re += features[n];\n }\n }\n\n else {\n for (int n = 0; n < 64; n++) {\n const float cos_n = cosLUT[i][n];\n const float sin_n = sinLUT[i][n] ? sqrt(1 - cos_n * cos_n) : -sqrt(1 - cos_n * cos_n);\n re += features[n] * cos_n;\n im -= features[n] * sin_n;\n }\n }\n\n fft[i] = sqrt(re * re + im * im);\n }\n}\n\r\n\r\n\r\n \r\n\tFinding this content useful?\r\n\r\n\t\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t \r\n \r\n \r\n \r\n\r\n\r\n\r\n\n\nThis post required much work to be produced, so I hope I didn't forgot anything and you found these information useful.\nAs always, there's a Github repo with all the code of this post.\nL'articolo “Principal” FFT components as efficient features extrator proviene da Eloquent Arduino Blog.", "date_published": "2020-09-05T10:52:02+02:00", "date_modified": "2020-09-05T17:14:34+02:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "microml", "Arduino Machine learning" ] }, { "id": "https://eloquentarduino.github.io/?p=1282", "url": "https://eloquentarduino.github.io/2020/08/better-word-classification-with-arduino-33-ble-sense-and-machine-learning/", "title": "Better word classification with Arduino Nano 33 BLE Sense and Machine Learning", "content_html": "Let's revamp the post I wrote about word classification using Machine Learning on Arduino, this time using a proper microphone (the MP34DT05 mounted on the Arduino Nano 33 BLE Sense) instead of a chinese, analog one: will the results improve?
\n \n\n
Updated on 16 October 2020: step by step explanation of the process with ready-made sketch code
\nThis tutorial will teach you how to capture audio from the Arduino Nano 33 BLE Sense microphone and classify it: at the end of this post, you will have a trained model able to detect, in real time, the word you speak, among the ones you trained it to recognize. The classification will occur directly on your Arduino board.
\nThis is not a general-purpose speech recognizer able to convert speech-to-text: it works only on the words you train it on.
\nHardware
\n\nSoftware
\nTo install the software, open your terminal and install the libraries.
\npip install -U scikit-learn\npip install -U micromlgen
\nFirst of all, we need to capture a bunch of examples of the words we want to recognize.
\nIn the original post, we used an analog microphone to record the audio. It is for sure the easiest way to interact with audio on a microcontroller since you only need to analogRead()
the selected pin to get a value from the sensor.
This semplicity, however, comes at the cost of a nearly inexistent signal pre-processing from the sensor itself: most of the time, you will get junk - I don't want to be rude, but that's it.
\nThe microphone mounted on the Arduino Nano 33 BLE Sense (the MP34DT05) is fortunately much better than this: it gives you access to a modulated signal much more suitable for our processing needs.
\nThe modulation used is pulse-density: I won't try to explain how this works, since I'm not an expert in DSP and it is not the main scope of this article (refer to Wikipedia for some more information).
\nWhat matters to us is that we can grab an array of bytes from the microphone and extract its Root Mean Square (a.k.a. RMS) to be used as a feature for our Machine Learning model.
\nI had some difficulty finding examples on how to access the microphone on the Arduino Nano 33 BLE Sense board: fortunately, there's a Github repo from DelaGia that shows how to access all the sensors of the board.
\nI extracted the microphone part and encapsulated it in an easy-to-use class, so you don't really need to dig into the implementation details if you're not interested.
\nWhen loaded on your Arduino Nano 33 BLE Sense, the following sketch will wait for you to speak in front of the microphone: once it detects a sound, it will record 64 audio values and print them to the serial monitor.
\nFrom my experience, 64 samples are sufficient to cover short words such as yes, no, play, stop: if you plan to classify longer words, you may need to increase this number.
\nDownload the Arduino Nano 33 BLE Sense - Capture audio samples sketch, open it in the Arduino IDE and flash it to your board.
\nHere's the main code.
\n#include "Mic.h"\n\n// tune as per your needs\n#define SAMPLES 64\n#define GAIN (1.0f/50)\n#define SOUND_THRESHOLD 2000\n\nfloat features[SAMPLES];\nMic mic;\n\nvoid setup() {\n Serial.begin(115200);\n PDM.onReceive(onAudio);\n mic.begin();\n delay(3000);\n}\n\nvoid loop() {\n // await for a word to be pronounced\n if (recordAudioSample()) {\n // print features to serial monitor\n for (int i = 0; i < SAMPLES; i++) {\n Serial.print(features[i], 6);\n Serial.print(i == SAMPLES - 1 ? '\\n' : ',');\n }\n\n delay(1000);\n }\n\n delay(20);\n}\n\n/**\n * PDM callback to update mic object\n */\nvoid onAudio() {\n mic.update();\n}\n\n/**\n * Read given number of samples from mic\n */\nbool recordAudioSample() {\n if (mic.hasData() && mic.data() > SOUND_THRESHOLD) {\n\n for (int i = 0; i < SAMPLES; i++) {\n while (!mic.hasData())\n delay(1);\n\n features[i] = mic.pop() * GAIN;\n }\n\n return true;\n }\n\n return false;\n}
\nNow that we have the acquisition logic in place, it's time for you to record some samples of the words you want to classify.
\nNow you have to capture as many samples of the words you want to classify as possible.
\nOpen the serial monitor and pronounce a word near the microphone: a line of numbers will be printed on the monitor.
\nThis is the description of your word.
\nYou need many lines like this for an accurate prediction, so keep repeating the same word 15-30 times.
\nAfter you repeated the same words many times, copy the content of the serial monitor and save it in a CSV file named after the word, for example yes.csv
.
Then clear the serial monitor and repeat the process for each word.
\nKeep all these files in a folder because we need them to train our classifier.
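\nIf you prefer not to copy-paste from the serial monitor, a small Python helper can log the samples straight into the CSV files for you. This is a hypothetical sketch using pyserial (the serial port name and the word are assumptions you must adapt; the data folder must already exist):
# log word samples from the board straight to data/<word>.csv
# stop with Ctrl+C when you have collected enough lines
import serial

PORT = "/dev/ttyACM0"   # assumption: adapt to your board's port (e.g. COM3 on Windows)
WORD = "yes"            # the word you are currently recording

with serial.Serial(PORT, 115200, timeout=5) as ser, open("data/%s.csv" % WORD, "a") as out:
    while True:
        line = ser.readline().decode().strip()

        if line:
            print(line)
            out.write(line + "\n")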
\nNow that we have the samples, it's time to train the classifier.
\nCreate a Python project in your favourite IDE or use your favourite text editor, if you don't have one.
\nAs described in my post about how to train a classifier, we create a Python script that reads all the files inside a folder and concatenates them in a single array you feed to the classifier model.
\nBe sure your folder structure is like the following:
\nArduinoWordClassification\n |-- train_classifier.py\n |-- data/\n |---- yes.csv\n |---- no.csv\n |---- play.csv\n |---- any other .csv file you recorded
\n# file: train_classifier.py\n\nimport numpy as np\nfrom os.path import basename\nfrom glob import glob\nfrom sklearn.svm import SVC\nfrom micromlgen import port\nfrom sklearn.model_selection import train_test_split\n\ndef load_features(folder):\n dataset = None\n classmap = {}\n for class_idx, filename in enumerate(glob('%s/*.csv' % folder)):\n class_name = basename(filename)[:-4]\n classmap[class_idx] = class_name\n samples = np.loadtxt(filename, dtype=float, delimiter=',')\n labels = np.ones((len(samples), 1)) * class_idx\n samples = np.hstack((samples, labels))\n dataset = samples if dataset is None else np.vstack((dataset, samples))\n return dataset, classmap\n\nnp.random.seed(0)\ndataset, classmap = load_features('data')\nX, y = dataset[:, :-1], dataset[:, -1]\n# this line is for testing your accuracy only: once you're satisfied with the results, set test_size to 1\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)\n\nclf = SVC(kernel='poly', degree=2, gamma=0.1, C=100)\nclf.fit(X_train, y_train)\n\nprint('Accuracy', clf.score(X_test, y_test))\nprint('Exported classifier to plain C')\nprint(port(clf, classmap=classmap))
\nAmong the classifiers I tried, SVM produced the best accuracy at 96% with 32 support vectors: it's not a super-tiny model, but it's quite small nevertheless.
\nIf you're not satisfied with SVM, you can use Decision Tree, Random Forest, Gaussian Naive Bayes or Relevant Vector Machines. See my other posts for a detailed description of each.
\nIn your console, after the accuracy score, you will have the plain C implementation of the classifier you trained. The following reports my SVM model.
\n// File: Classifier.h\n\n#pragma once\nnamespace Eloquent {\n namespace ML {\n namespace Port {\n class SVM {\n public:\n /**\n * Predict class for features vector\n */\n int predict(float *x) {\n float kernels[35] = { 0 };\n float decisions[6] = { 0 };\n int votes[4] = { 0 };\n kernels[0] = compute_kernel(x, 33.0 , 41.0 , 47.0 , 54.0 , 59.0 , 61.0 , 56.0 , 51.0 , 50.0 , 51.0 , 44.0 , 32.0 , 23.0 , 15.0 , 12.0 , 8.0 , 5.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 5.0 , 3.0 , 5.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 );\n kernels[1] = compute_kernel(x, 40.0 , 50.0 , 51.0 , 60.0 , 56.0 , 57.0 , 58.0 , 53.0 , 50.0 , 45.0 , 42.0 , 34.0 , 23.0 , 16.0 , 10.0 , 7.0 , 3.0 , 3.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 14.0 , 3.0 , 8.0 , 0.0 , 0.0 , 3.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 3.0 , 0.0 , 0.0 , 5.0 , 3.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 3.0 , 0.0 , 5.0 , 3.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 3.0 , 0.0 , 0.0 , 0.0 , 3.0 );\n kernels[2] = compute_kernel(x, 56.0 , 68.0 , 78.0 , 91.0 , 84.0 , 84.0 , 84.0 , 74.0 , 69.0 , 64.0 , 57.0 , 44.0 , 33.0 , 18.0 , 12.0 , 8.0 , 5.0 , 9.0 , 15.0 , 12.0 , 12.0 , 9.0 , 12.0 , 7.0 , 3.0 , 10.0 , 12.0 , 6.0 , 3.0 , 0.0 , 0.0 , 0.0 , 0.0 , 6.0 , 3.0 , 6.0 , 10.0 , 10.0 , 8.0 , 3.0 , 9.0 , 9.0 , 9.0 , 8.0 , 9.0 , 9.0 , 11.0 , 3.0 , 8.0 , 9.0 , 8.0 , 8.0 , 8.0 , 6.0 , 7.0 , 3.0 , 3.0 , 8.0 , 5.0 , 3.0 , 0.0 , 3.0 , 0.0 , 0.0 );\n\n // ...many other kernels computations...\n\n decisions[0] = 0.722587775297\n + kernels[1] * 3.35855e-07\n + kernels[2] * 1.64612e-07\n + kernels[4] * 6.00056e-07\n + kernels[5] * 3.5195e-08\n + kernels[7] * -4.2079e-08\n + kernels[8] * -4.2843e-08\n + kernels[9] * -9.994e-09\n + kernels[10] * -5.11065e-07\n + kernels[11] * -5.979e-09\n + kernels[12] * -4.4672e-08\n + kernels[13] * -1.5606e-08\n + kernels[14] * -1.2941e-08\n + kernels[15] * -2.18903e-07\n + kernels[17] * -2.31635e-07\n ;\n decisions[1] = -1.658344586719\n + kernels[0] * 2.45018e-07\n + kernels[1] * 4.30223e-07\n + kernels[3] * 1.00277e-07\n + kernels[4] * 2.16524e-07\n + kernels[18] * -4.81187e-07\n + kernels[20] * -5.10856e-07\n ;\n decisions[2] = -1.968607562265\n + kernels[0] * 3.001833e-06\n + kernels[3] * 4.5201e-08\n + kernels[4] * 1.54493e-06\n + kernels[5] * 2.81834e-07\n + kernels[25] * -5.93581e-07\n + kernels[26] * -2.89779e-07\n + kernels[27] * -1.73958e-06\n + kernels[28] * -1.09552e-07\n + kernels[30] * -3.09126e-07\n + kernels[31] * -1.294219e-06\n + kernels[32] * -5.37961e-07\n ;\n decisions[3] = -0.720663029823\n + kernels[6] * 1.4362e-08\n + kernels[7] * 6.177e-09\n + kernels[9] * 1.25e-08\n + kernels[10] * 2.05478e-07\n + kernels[12] * 2.501e-08\n + kernels[15] * 4.363e-07\n + kernels[16] * 9.147e-09\n + kernels[18] * -1.82182e-07\n + kernels[20] * -4.93707e-07\n + kernels[21] * -3.3084e-08\n ;\n decisions[4] = -1.605747746589\n + kernels[6] * 6.182e-09\n + kernels[7] * 1.3853e-08\n + kernels[8] * 2.12e-10\n + kernels[9] * 1.1243e-08\n + kernels[10] * 7.80681e-07\n + kernels[15] * 8.347e-07\n + kernels[17] * 1.64985e-07\n + kernels[23] * -4.25014e-07\n + kernels[25] * -1.134803e-06\n + kernels[34] * -2.52038e-07\n ;\n decisions[5] = -0.934328303475\n + kernels[19] * 3.3529e-07\n + kernels[20] * 1.121946e-06\n + kernels[21] * 3.44683e-07\n + kernels[22] * -6.23056e-07\n + kernels[24] * -1.4612e-07\n + kernels[28] * -1.24025e-07\n + kernels[29] 
* -4.31701e-07\n + kernels[31] * -9.2146e-08\n + kernels[33] * -3.8487e-07\n ;\n votes[decisions[0] > 0 ? 0 : 1] += 1;\n votes[decisions[1] > 0 ? 0 : 2] += 1;\n votes[decisions[2] > 0 ? 0 : 3] += 1;\n votes[decisions[3] > 0 ? 1 : 2] += 1;\n votes[decisions[4] > 0 ? 1 : 3] += 1;\n votes[decisions[5] > 0 ? 2 : 3] += 1;\n int val = votes[0];\n int idx = 0;\n\n for (int i = 1; i < 4; i++) {\n if (votes[i] > val) {\n val = votes[i];\n idx = i;\n }\n }\n\n return idx;\n }\n\n /**\n * Convert class idx to readable name\n */\n const char* predictLabel(float *x) {\n switch (predict(x)) {\n case 0:\n return "no";\n case 1:\n return "stop";\n case 2:\n return "play";\n case 3:\n return "yes";\n default:\n return "Houston we have a problem";\n }\n }\n\n protected:\n /**\n * Compute kernel between feature vector and support vector.\n * Kernel type: poly\n */\n float compute_kernel(float *x, ...) {\n va_list w;\n va_start(w, 64);\n float kernel = 0.0;\n\n for (uint16_t i = 0; i < 64; i++) {\n kernel += x[i] * va_arg(w, double);\n }\n\n return pow((0.1 * kernel) + 0.0, 2);\n }\n };\n }\n }\n}
\nNow we have all the pieces we need to perform word classification on our Arduino board.
\nDownload the Arduino Nano 33 BLE Sense - Audio classification sketch, open it in the Arduino IDE and paste the plain C code you got in the console inside the Classifier.h
file (delete all its contents before!).
Fine: it's time to deploy!
\nHit the upload button: if everything went fine, open the serial monitor and pronounce one of the words you recorded during Step 1
.
Hopefully, you will read the word on the serial monitor.
\nHere's a quick demo (please forgive me for the bad video quality).
\nIf you liked this tutorial and it helped you successfully implement word classification on your Arduino Nano 33 BLE Sense, please share it on your social media so others can benefit too.
\nIf you have troubles or questions, don't hesitate to leave a comment: I will be happy to help you.
\nThe article Better word classification with Arduino Nano 33 BLE Sense and Machine Learning comes from Eloquent Arduino Blog.
\n", "content_text": "Let's revamp the post I wrote about word classification using Machine Learning on Arduino, this time using a proper microphone (the MP34DT05 mounted on the Arduino Nano 33 BLE Sense) instead of a chinese, analog one: will the results improve?\nfrom https://www.udemy.com/course/learn-audio-processing-complete-engineers-course/\n\nUpdated on 16 October 2020: step by step explanation of the process with ready-made sketch code\nTable of contentsWhat you'll learnWhat you'll needStep 1. Capture audio samplesTheory: Pulse-density modulation (a.k.a. PDM)Practice: the code to capture the samplesAction: capture the words examplesStep 2. Train the machine learning modelStep 3. Deploy to your microcontroller\nWhat you'll learn\nThis tutorial will teach you how to capture audio from the Arduino Nano 33 BLE Sense microphone and classify it: at the end of this post, you will have a trained model able to detect in real-time the word you tell, among the ones that you trained it to recognize. The classification will occur directly on your Arduino board.\nThis is not a general-purpose speech recognizer able to convert speech-to-text: it works only on the words you train it on.\nWhat you'll need\n\n\nHardware\n\nArduino Nano 33 BLE Sense\n\n\n\nSoftware\n\nPython\nPython's module scikit-learn\nPython's module micromlgen\n\n\n\nTo install the software, open your terminal and install the libraries.\npip install -U scikit-learn\npip install -U micromlgen\nStep 1. Capture audio samples\nFirst of all, we need to capture a bunch of examples of the words we want to recognize.\nIn the original post, we used an analog microphone to record the audio. It is for sure the easiest way to interact with audio on a microcontroller since you only need to analogRead() the selected pin to get a value from the sensor.\nThis semplicity, however, comes at the cost of a nearly inexistent signal pre-processing from the sensor itself: most of the time, you will get junk - I don't want to be rude, but that's it.\nTheory: Pulse-density modulation (a.k.a. PDM)\nThe microphone mounted on the Arduino Nano 33 BLE Sense (the MP34DT05) is fortunately much better than this: it gives you access to a modulated signal much more suitable for our processing needs.\nThe modulation used is pulse-density: I won't try to explain you how this works since I'm not an expert in DSP and neither it is the main scope of this article (refer to Wikipedia for some more information).\nWhat matters to us is that we can grab an array of bytes from the microphone and extract its Root Mean Square (a.k.a. 
RMS) to be used as a feature for our Machine Learning model.\nI had some difficulty finding examples on how to access the microphone on the Arduino Nano 33 BLE Sense board: fortunately, there's a Github repo from DelaGia that shows how to access all the sensors of the board.\nI extracted the microphone part and incapsulated it in an easy to use class, so you don't really need to dig into the implementation details if you're not interested.\nPractice: the code to capture the samples\nWhen loaded on your Arduino Nano 33 BLE Sense, the following sketch will await for you to speak in front of the microphone: once it detects a sound, it will record 64 audio values and print them to the serial monitor.\nFrom my experience, 64 samples are sufficient to cover short words such as yes, no, play, stop: if you plan to classify longer words, you may need to increase this number.\nI suggest you keep the words short: longer words will probably decrease the accuracy of the model. If you want nonetheless a longer duration, at least keep the number of words as low as possible\nDownload the Arduino Nano 33 BLE Sense - Capture audio samples sketch, open it the Arduino IDE and flash it to your board.\nHere's the main code.\n#include "Mic.h"\n\n// tune as per your needs\n#define SAMPLES 64\n#define GAIN (1.0f/50)\n#define SOUND_THRESHOLD 2000\n\nfloat features[SAMPLES];\nMic mic;\n\nvoid setup() {\n Serial.begin(115200);\n PDM.onReceive(onAudio);\n mic.begin();\n delay(3000);\n}\n\nvoid loop() {\n // await for a word to be pronounced\n if (recordAudioSample()) {\n // print features to serial monitor\n for (int i = 0; i < SAMPLES; i++) {\n Serial.print(features[i], 6);\n Serial.print(i == SAMPLES - 1 ? '\\n' : ',');\n }\n\n delay(1000);\n }\n\n delay(20);\n}\n\n/**\n * PDM callback to update mic object\n */\nvoid onAudio() {\n mic.update();\n}\n\n/**\n * Read given number of samples from mic\n */\nbool recordAudioSample() {\n if (mic.hasData() && mic.data() > SOUND_THRESHOLD) {\n\n for (int i = 0; i < SAMPLES; i++) {\n while (!mic.hasData())\n delay(1);\n\n features[i] = mic.pop() * GAIN;\n }\n\n return true;\n }\n\n return false;\n}\nNow that we have the acquisition logic in place, it's time for you to record some samples of the words you want to classify. \nAction: capture the words examples\nNow you have to capture as many samples of the words you want to classify as possible.\nOpen the serial monitor and pronounce a word near the microphone: a line of numbers will be printed on the monitor.\nThis is the description of your word.\nYou need many lines like this for an accurate prediction, so keep repeating the same word 15-30 times.\n**My advice**: while recording the samples, vary both the distance of your mounth from the mic and the intensity of your voice: this will produce a more robust classification model later on.\nAfter you repeated the same words many times, copy the content of the serial monitor and save it in a CSV file named after the word, for example yes.csv.\nThen clear the serial monitor and repeat the process for each word.\nKeep all these files in a folder because we need them to train our classifier.\nStep 2. 
Train the machine learning model\nNow that we have the samples, it's time to train the classifier.\nCreate a Python project in your favourite IDE or use your favourite text editor, if you don't have one.\nAs described in my post about how to train a classifier, we create a Python script that reads all the files inside a folder and concatenates them in a single array you feed to the classifier model.\nBe sure your folder structure is like the following:\nArduinoWordClassification\n |-- train_classifier.py\n |-- data/\n |---- yes.csv\n |---- no.csv\n |---- play.csv\n |---- any other .csv file you recorded\n# file: train_classifier.py\n\nimport numpy as np\nfrom os.path import basename\nfrom glob import glob\nfrom sklearn.svm import SVC\nfrom micromlgen import port\nfrom sklearn.model_selection import train_test_split\n\ndef load_features(folder):\n dataset = None\n classmap = {}\n for class_idx, filename in enumerate(glob('%s/*.csv' % folder)):\n class_name = basename(filename)[:-4]\n classmap[class_idx] = class_name\n samples = np.loadtxt(filename, dtype=float, delimiter=',')\n labels = np.ones((len(samples), 1)) * class_idx\n samples = np.hstack((samples, labels))\n dataset = samples if dataset is None else np.vstack((dataset, samples))\n return dataset, classmap\n\nnp.random.seed(0)\ndataset, classmap = load_features('data')\nX, y = dataset[:, :-1], dataset[:, -1]\n# this line is for testing your accuracy only: once you're satisfied with the results, set test_size to 1\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)\n\nclf = SVC(kernel='poly', degree=2, gamma=0.1, C=100)\nclf.fit(X_train, y_train)\n\nprint('Accuracy', clf.score(X_test, y_test))\nprint('Exported classifier to plain C')\nprint(port(clf, classmap=classmap))\nAmong the classifiers I tried, SVM produced the best accuracy at 96% with 32 support vectors: it's not a super-tiny model, but it's quite small nevertheless.\nIf you're not satisifed with SVM, you can use Decision Tree, Random Forest, Gaussian Naive Bayes, Relevant Vector Machines. See my other posts for a detailed description of each.\nIn your console, after the accuracy score, you will have the plain C implementation of the classifier you trained. 
The following reports my SVM model.\n// File: Classifier.h\n\n#pragma once\nnamespace Eloquent {\n namespace ML {\n namespace Port {\n class SVM {\n public:\n /**\n * Predict class for features vector\n */\n int predict(float *x) {\n float kernels[35] = { 0 };\n float decisions[6] = { 0 };\n int votes[4] = { 0 };\n kernels[0] = compute_kernel(x, 33.0 , 41.0 , 47.0 , 54.0 , 59.0 , 61.0 , 56.0 , 51.0 , 50.0 , 51.0 , 44.0 , 32.0 , 23.0 , 15.0 , 12.0 , 8.0 , 5.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 5.0 , 3.0 , 5.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 );\n kernels[1] = compute_kernel(x, 40.0 , 50.0 , 51.0 , 60.0 , 56.0 , 57.0 , 58.0 , 53.0 , 50.0 , 45.0 , 42.0 , 34.0 , 23.0 , 16.0 , 10.0 , 7.0 , 3.0 , 3.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 14.0 , 3.0 , 8.0 , 0.0 , 0.0 , 3.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 3.0 , 0.0 , 0.0 , 5.0 , 3.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 3.0 , 0.0 , 5.0 , 3.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 3.0 , 0.0 , 0.0 , 0.0 , 3.0 );\n kernels[2] = compute_kernel(x, 56.0 , 68.0 , 78.0 , 91.0 , 84.0 , 84.0 , 84.0 , 74.0 , 69.0 , 64.0 , 57.0 , 44.0 , 33.0 , 18.0 , 12.0 , 8.0 , 5.0 , 9.0 , 15.0 , 12.0 , 12.0 , 9.0 , 12.0 , 7.0 , 3.0 , 10.0 , 12.0 , 6.0 , 3.0 , 0.0 , 0.0 , 0.0 , 0.0 , 6.0 , 3.0 , 6.0 , 10.0 , 10.0 , 8.0 , 3.0 , 9.0 , 9.0 , 9.0 , 8.0 , 9.0 , 9.0 , 11.0 , 3.0 , 8.0 , 9.0 , 8.0 , 8.0 , 8.0 , 6.0 , 7.0 , 3.0 , 3.0 , 8.0 , 5.0 , 3.0 , 0.0 , 3.0 , 0.0 , 0.0 );\n\n // ...many other kernels computations...\n\n decisions[0] = 0.722587775297\n + kernels[1] * 3.35855e-07\n + kernels[2] * 1.64612e-07\n + kernels[4] * 6.00056e-07\n + kernels[5] * 3.5195e-08\n + kernels[7] * -4.2079e-08\n + kernels[8] * -4.2843e-08\n + kernels[9] * -9.994e-09\n + kernels[10] * -5.11065e-07\n + kernels[11] * -5.979e-09\n + kernels[12] * -4.4672e-08\n + kernels[13] * -1.5606e-08\n + kernels[14] * -1.2941e-08\n + kernels[15] * -2.18903e-07\n + kernels[17] * -2.31635e-07\n ;\n decisions[1] = -1.658344586719\n + kernels[0] * 2.45018e-07\n + kernels[1] * 4.30223e-07\n + kernels[3] * 1.00277e-07\n + kernels[4] * 2.16524e-07\n + kernels[18] * -4.81187e-07\n + kernels[20] * -5.10856e-07\n ;\n decisions[2] = -1.968607562265\n + kernels[0] * 3.001833e-06\n + kernels[3] * 4.5201e-08\n + kernels[4] * 1.54493e-06\n + kernels[5] * 2.81834e-07\n + kernels[25] * -5.93581e-07\n + kernels[26] * -2.89779e-07\n + kernels[27] * -1.73958e-06\n + kernels[28] * -1.09552e-07\n + kernels[30] * -3.09126e-07\n + kernels[31] * -1.294219e-06\n + kernels[32] * -5.37961e-07\n ;\n decisions[3] = -0.720663029823\n + kernels[6] * 1.4362e-08\n + kernels[7] * 6.177e-09\n + kernels[9] * 1.25e-08\n + kernels[10] * 2.05478e-07\n + kernels[12] * 2.501e-08\n + kernels[15] * 4.363e-07\n + kernels[16] * 9.147e-09\n + kernels[18] * -1.82182e-07\n + kernels[20] * -4.93707e-07\n + kernels[21] * -3.3084e-08\n ;\n decisions[4] = -1.605747746589\n + kernels[6] * 6.182e-09\n + kernels[7] * 1.3853e-08\n + kernels[8] * 2.12e-10\n + kernels[9] * 1.1243e-08\n + kernels[10] * 7.80681e-07\n + kernels[15] * 8.347e-07\n + kernels[17] * 1.64985e-07\n + kernels[23] * -4.25014e-07\n + kernels[25] * -1.134803e-06\n + kernels[34] * -2.52038e-07\n ;\n decisions[5] = -0.934328303475\n + kernels[19] * 3.3529e-07\n + kernels[20] * 1.121946e-06\n + kernels[21] * 3.44683e-07\n + kernels[22] * -6.23056e-07\n + kernels[24] * -1.4612e-07\n + 
kernels[28] * -1.24025e-07\n + kernels[29] * -4.31701e-07\n + kernels[31] * -9.2146e-08\n + kernels[33] * -3.8487e-07\n ;\n votes[decisions[0] > 0 ? 0 : 1] += 1;\n votes[decisions[1] > 0 ? 0 : 2] += 1;\n votes[decisions[2] > 0 ? 0 : 3] += 1;\n votes[decisions[3] > 0 ? 1 : 2] += 1;\n votes[decisions[4] > 0 ? 1 : 3] += 1;\n votes[decisions[5] > 0 ? 2 : 3] += 1;\n int val = votes[0];\n int idx = 0;\n\n for (int i = 1; i < 4; i++) {\n if (votes[i] > val) {\n val = votes[i];\n idx = i;\n }\n }\n\n return idx;\n }\n\n /**\n * Convert class idx to readable name\n */\n const char* predictLabel(float *x) {\n switch (predict(x)) {\n case 0:\n return "no";\n case 1:\n return "stop";\n case 2:\n return "play";\n case 3:\n return "yes";\n default:\n return "Houston we have a problem";\n }\n }\n\n protected:\n /**\n * Compute kernel between feature vector and support vector.\n * Kernel type: poly\n */\n float compute_kernel(float *x, ...) {\n va_list w;\n va_start(w, 64);\n float kernel = 0.0;\n\n for (uint16_t i = 0; i < 64; i++) {\n kernel += x[i] * va_arg(w, double);\n }\n\n return pow((0.1 * kernel) + 0.0, 2);\n }\n };\n }\n }\n}\nStep 3. Deploy to your microcontroller\nNow we have all the pieces we need to perform word classification on our Arduino board.\nDownload the Arduino Nano 33 BLE Sense - Audio classification sketch, open it in the Arduino IDE and paste the plain C code you got in the console inside the Classifier.h file (delete all its contents before!).\nFine: it's time to deploy!\nHit the upload button: if everything went fine, open the serial monitor and pronounce one of the words you recorded during Step 1.\nHopefully, you will read the word on the serial monitor.\nHere's a quick demo (please forgive me for the bad video quality).\n\nhttps://eloquentarduino.github.io/wp-content/uploads/2020/08/Arduino-Nano-33-BLE-Sense-Word-classification.mp4\n\nIf you liked this tutorial and it helped you successfully implement word classification on your Arduino Nano 33 BLE Sense, please share it on your social media so others can benefit too.\nIf you have troubles or questions, don't hesitate to leave a comment: I will be happy to help you.\nL'articolo Better word classification with Arduino Nano 33 BLE Sense and Machine Learning proviene da Eloquent Arduino Blog.", "date_published": "2020-08-24T19:04:57+02:00", "date_modified": "2020-10-17T17:50:13+02:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "microml", "ml", "Arduino Machine learning" ], "attachments": [ { "url": "https://eloquentarduino.github.io/wp-content/uploads/2020/08/Arduino-Nano-33-BLE-Sense-Word-classification.mp4", "mime_type": "video/mp4", "size_in_bytes": 5594095 } ] }, { "id": "https://eloquentarduino.github.io/?p=1237", "url": "https://eloquentarduino.com/projects/arduino-indoor-positioning", "title": "The Ultimate Guide to Wifi Indoor Positioning using Arduino and Machine Learning", "content_html": "This will be the most detailed, easy to follow tutorial over the Web on how to implement Wifi indoor positioning using an Arduino microcontroller and Machine Learning. It contains all the steps, tools and code from the start to the end of the project.
\n
\nri-elaborated from https://www.accuware.com/blog/ambient-signals-plus-video-images/
\n
My original post about Wifi indoor positioning is one of my top-performing posts of all time (after motion detection using the ESP32 camera and the introductory post on Machine Learning for Arduino). This is why I decided to revamp it and add some more details, tools and scripts, to create the most complete free guide on how to implement such a system, from beginning to end.
\nThis post will cover all the necessary steps and provide all the code you need: for an introduction to the topic, I point you to the original post.
\nThis part stays the same as the original post: we will use the RSSIs (signal strengths) of the nearby Wifi hotspots to classify which location we're in.
\nEach location will "see" a certain number of networks, each with a RSSI that will stay mostly the same: the unique combination of these RSSIs will become a fingerprint to distinguish the locations from one another.
\nSince not all networks will be visible all the time, the shape of our data will most likely be that of a sparse matrix.
\nA sparse matrix is a matrix where most of the elements will be zero, meaning the absence of the given feature. Only the relevant elements will be non-zero and will represent the RSSI of the nth network.
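\nIn code, building one row of such a matrix is just a lookup of each known network in the scan result; a tiny illustrative Python sketch (with made-up network names, mirroring the table below):
# turn a Wifi scan (SSID -> RSSI) into a fixed-length, mostly-zero feature vector
KNOWN_NETWORKS = ["Net #1", "Net #2", "Net #3", "Net #4", "Net #5", "Net #6", "Net #7"]

def scan_to_features(scan):
    # networks not visible in this scan stay at 0
    return [scan.get(ssid, 0) for ssid in KNOWN_NETWORKS]

print(scan_to_features({"Net #1": 50, "Net #2": 30, "Net #3": 60}))
# -> [50, 30, 60, 0, 0, 0, 0]  (the "Kitchen/1" row of the table below)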
The following example table should give you an idea of what our data will look like.
\nLocation | \nNet #1 | \nNet #2 | \nNet #3 | \nNet #4 | \nNet #5 | \nNet #6 | \nNet #7 | \n
---|---|---|---|---|---|---|---|
Kitchen/1 | \n50 | \n30 | \n60 | \n0 | \n0 | \n0 | \n0 | \n
Kitchen/2 | \n55 | \n30 | \n55 | \n0 | \n0 | \n5 | \n0 | \n
Kitchen/3 | \n50 | \n35 | \n65 | \n0 | \n0 | \n0 | \n5 | \n
Bedroom/1 | \n0 | \n80 | \n0 | \n80 | \n0 | \n40 | \n40 | \n
Bedroom/2 | \n0 | \n80 | \n0 | \n85 | \n10 | \n20 | \n20 | \n
Bedroom/3 | \n0 | \n70 | \n0 | \n85 | \n0 | \n30 | \n40 | \n
Bathroom/1 | \n0 | \n0 | \n30 | \n80 | \n80 | \n0 | \n0 | \n
Bathroom/2 | \n0 | \n0 | \n10 | \n90 | \n85 | \n0 | \n0 | \n
Bathroom/3 | \n0 | \n0 | \n30 | \n90 | \n90 | \n5 | \n0 | \n
Even though the numbers in this table are fake, you should recognize a pattern:
\nOur machine learning algorithm should be able to extract each location's fingerprint without being fooled by these inconsistent features.
\nNow that we know what our data should look like, we need to first get it.
\nIn the original post, this step was the one I was unhappy with, since it's not as straightforward as I would have liked. The method I present in this post, instead, is far simpler to follow.
\nFirst of all, you will need a Wifi equipped board. I will use an Arduino MKR WiFi 1010, but any ESP8266 / ESP32 or the like will work.
\nThe following sketch will do the job: it scans the visible networks at a regular interval and prints their RSSIs encoded in JSON format.
\n// file DataGathering.h\n\n#include "WiFi.h"\n\n#define print(string) Serial.print(string);\n#define quote(string) print('"'); print(string); print('"');\n\nString location = "";\n\n/**\n * \n */\nvoid setup() {\n Serial.begin(115200);\n delay(3000);\n WiFi.disconnect();\n}\n\n/**\n * \n */\nvoid loop() { \n // if location is set, scan networks\n if (location != "") {\n int numNetworks = WiFi.scanNetworks();\n\n // print location\n print('{');\n quote("__location");\n print(": ");\n quote(location);\n print(", ");\n\n // print each network SSID and RSSI\n for (int i = 0; i < numNetworks; i++) {\n quote(WiFi.SSID(i));\n print(": ");\n print(WiFi.RSSI(i));\n print(i == numNetworks - 1 ? "}\\n" : ", ");\n }\n\n delay(1000);\n }\n // else wait for user to enter the location\n else {\n String input;\n\n Serial.println("Enter 'scan {location}' to start the scanning");\n\n while (!Serial.available())\n delay(200);\n\n input = Serial.readStringUntil('\\n');\n\n if (input.indexOf("scan ") == 0) {\n input.replace("scan ", "");\n location = input;\n }\n else {\n location = "";\n }\n }\n}
\nUpload the sketch to your board and start mapping your house / office: go to the target location and type scan {location} in the serial monitor, where {location} is the name you want to give to the current location (so, for example, if you're mapping the kitchen, type scan kitchen).
Move around the room a bit so you capture a few variations of the visible hotspots: this will lead to a more robust classification later on.
\nTo stop the recording, just type stop in the serial monitor.
Now repeat this process for each location you want to classify. At this point you should end up with something similar to the following:
\n{"__location": "Kitchen", "N1": 100, "N2": 50}\n{"__location": "Bedroom", "N3": 100, "N2": 50}\n{"__location": "Bathroom", "N1": 100, "N4": 50}\n{"__location": "Bathroom", "N5": 100, "N4": 50}
\nIn your case, "N1", "N2"... will be the names of the visible networks.
\nWhen you're happy with your training data, it's time to convert it to something useful.
\nGiven the data we have, we want to generate C code that can convert a Wifi scan result into a feature vector we can use for classification.
\nSince I'm a fan of code-generators, I wrote one specifically for this very project. And since I already have a code-generator library I use for Machine Learning code written in Python, I updated it with this new functionality.
\nStart by installing the library.
\n# be sure it installs version >= 1.1.8\npip install --upgrade micromlgen
\nNow create a script with the following code:
\nfrom micromlgen import port_wifi_indoor_positioning\n\nif __name__ == '__main__':\n samples = '''\n {"__location": "Kitchen", "N1": 100, "N2": 50}\n {"__location": "Bedroom", "N3": 100, "N2": 50}\n {"__location": "Bathroom", "N1": 100, "N4": 50}\n {"__location": "Bathroom", "N5": 100, "N4": 50}\n '''\n X, y, classmap, converter_code = port_wifi_indoor_positioning(samples)\n print(converter_code)
\nOf course you have to replace the samples
content with the output you got in the previous step.
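\nIf you saved the serial monitor output to a file instead (wifi_samples.txt below is just a placeholder name), a small variation of the same script reads it from disk:
\nfrom micromlgen import port_wifi_indoor_positioning\n\nif __name__ == '__main__':\n    # wifi_samples.txt is a hypothetical file holding the serial monitor output,\n    # one JSON object per line, exactly as printed by the data gathering sketch\n    with open('wifi_samples.txt') as file:\n        samples = file.read()\n\n    X, y, classmap, converter_code = port_wifi_indoor_positioning(samples)\n    print(converter_code)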
In the console you should see a C++ class we will use later in the Arduino sketch. The class should be similar to the following example code.
\n// Save this code in your sketch as Converter.h\n\n#pragma once\nnamespace Eloquent {\n namespace Projects {\n class WifiIndoorPositioning {\n public:\n /**\n * Get feature vector\n */\n float* getFeatures() {\n static float features[5] = {0};\n uint8_t numNetworks = WiFi.scanNetworks();\n\n for (uint8_t i = 0; i < 5; i++) {\n features[i] = 0;\n }\n\n for (uint8_t i = 0; i < numNetworks; i++) {\n int featureIdx = ssidToFeatureIdx(WiFi.SSID(i));\n\n if (featureIdx >= 0) {\n features[featureIdx] = WiFi.RSSI(i);\n }\n }\n\n return features;\n }\n\n protected:\n /**\n * Convert SSID to featureIdx\n */\n int ssidToFeatureIdx(String ssid) {\n if (ssid.equals("N1"))\n return 0;\n\n if (ssid.equals("N2"))\n return 1;\n\n if (ssid.equals("N3"))\n return 2;\n\n if (ssid.equals("N4"))\n return 3;\n\n if (ssid.equals("N5"))\n return 4;\n\n return -1;\n }\n };\n }\n }
\nI will briefly explain what it does: when you call getFeatures()
, it runs a Wifi scan and for each network it finds, it fills the corresponding element in the feature vector (if the network is a known one).
At the end of the procedure, your feature vector will look something like [0, 10, 0, 0, 50, 0, 0]
, each element representing the RSSI of a given network.
To close the loop of the project, we need to be able to classify the feature vector into one of the recorded locations. Since we already have micromlgen installed, it will be very easy to do so.
Let's update the Python code we already have: this time, instead of printing the converter code, we will print the classifier code.
\n# install ml package first\npip install scikit-learn
\nfrom sklearn.tree import DecisionTreeClassifier\nfrom micromlgen import port_wifi_indoor_positioning, port\n\nif __name__ == '__main__':\n samples = '''\n {"__location": "Kitchen", "N1": 100, "N2": 50}\n {"__location": "Bedroom", "N3": 100, "N2": 50}\n {"__location": "Bathroom", "N1": 100, "N4": 50}\n {"__location": "Bathroom", "N5": 100, "N4": 50}\n '''\n X, y, classmap, converter_code = port_wifi_indoor_positioning(samples)\n clf = DecisionTreeClassifier()\n clf.fit(X, y)\n print(port(clf, classmap=classmap))
\nHere I chose a decision tree because it is a very lightweight algorithm and should work fine for the kind of features we're working with.
\nIf you're not satisfied with the results, you can try to use SVM or Gaussian Naive Bayes, which are both supported by micromlgen
.
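\nAs a sketch, swapping the classifier only changes a couple of lines (the hyperparameters here are defaults, not tuned for this problem):
\nfrom sklearn.svm import SVC\nfrom sklearn.naive_bayes import GaussianNB\nfrom micromlgen import port_wifi_indoor_positioning, port\n\nif __name__ == '__main__':\n    samples = '''\n    {"__location": "Kitchen", "N1": 100, "N2": 50}\n    {"__location": "Bedroom", "N3": 100, "N2": 50}\n    {"__location": "Bathroom", "N1": 100, "N4": 50}\n    {"__location": "Bathroom", "N5": 100, "N4": 50}\n    '''\n    X, y, classmap, converter_code = port_wifi_indoor_positioning(samples)\n\n    # pick one of the alternatives: a linear SVM...\n    clf = SVC(kernel='linear')\n    # ...or Gaussian Naive Bayes\n    # clf = GaussianNB()\n\n    clf.fit(X, y)\n    print(port(clf, classmap=classmap))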
In the console you will see the generated code for the classifier you trained. In the case of DecisionTree
the code will look like the following.
// Save this code in your sketch as Classifier.h\n\n#pragma once\nnamespace Eloquent {\n namespace ML {\n namespace Port {\n class DecisionTree {\n public:\n /**\n * Predict class for features vector\n */\n int predict(float *x) {\n if (x[2] <= 25.0) {\n if (x[4] <= 50.0) {\n return 1;\n }\n\n else {\n return 2;\n }\n }\n\n else {\n return 0;\n }\n }\n\n /**\n * Convert class idx to readable name\n */\n const char* predictLabel(float *x) {\n switch (predict(x)) {\n case 0:\n return "Bathroom";\n case 1:\n return "Bedroom";\n case 2:\n return "Kitchen";\n default:\n return "Houston we have a problem";\n }\n }\n\n protected:\n };\n }\n }\n }
\nNow that we have all the pieces together, we only need to merge them to get a complete working example.
\n// file WifiIndoorPositioning.h\n\n#include "WiFi.h"\n#include "Converter.h"\n#include "Classifier.h"\n\nEloquent::Projects::WifiIndoorPositioning positioning;\nEloquent::ML::Port::DecisionTree classifier;\n\nvoid setup() {\n Serial.begin(115200);\n}\n\nvoid loop() {\n Serial.print("You're in ");\n Serial.println(classifier.predictLabel(positioning.getFeatures()));\n delay(3000);\n}
\nAt the bare minimum, the above code runs the scan and tells you which location you're in. That's it.
\nThis system should be pretty accurate and robust if you gather the data properly, though I can't quantify exactly how accurate.
\nThis is not an indoor navigation system: it can't tell you "the coordinates" of where you are, it can only detect which room you're in.
\nIf your location lacks nearby Wifi hotspots, an easy and cheap solution is to scatter a bunch of ESP8266 / ESP32 boards around your house, each acting as an Access Point: with this simple trick you should be able to get as accurate as needed by just adding more boards.
\nWith this in-depth tutorial I hope I helped you go from start to finish in setting up a Wifi indoor positioning system, using cheap hardware such as ESP8266 / ESP32 boards and the Arduino IDE.
\nAs you can see, Machine learning doesn't have to be intimidating, even for beginners: you just need the right tools to get the job done.
\nIf this guide got you excited about Machine learning on microcontrollers, I invite you to read the many other posts I wrote on the topic and share them on social media.
\nYou can find the whole project on Github. Don't forget to star the repo if you like it.
\nThe article The Ultimate Guide to Wifi Indoor Positioning using Arduino and Machine Learning comes from Eloquent Arduino Blog.
\n", "content_text": "This will be the most detailed, easy to follow tutorial over the Web on how to implement Wifi indoor positioning using an Arduino microcontroller and Machine Learning. It contains all the steps, tools and code from the start to the end of the project.\n\nri-elaborated from https://www.accuware.com/blog/ambient-signals-plus-video-images/\n\nMy original post abot Wifi indoor positioning is one of my top-performing post of all time (after motion detection using ESP32 camera and the introductory post on Machine Learning for Arduino). This is why I settled to revamp it and add some more details, tools and scripts to create the most complete free guide on how to implement such a system, from the beginning to the end.\nThis post will cover all the necessary steps and provide all the code you need: for an introduction to the topic, I point you to the original post.\nTable of contentsFeatures definitionData gatheringGenerating the features converterGenerating the classifierWrapping it all togetherDisclaimer\nFeatures definition\nThis part stays the same as the original post: we will use the RSSIs (signal strength) of the nearby Wifi hotspots to classifiy which location we're in.\nEach location will "see" a certain number of networks, each with a RSSI that will stay mostly the same: the unique combination of these RSSIs will become a fingerprint to distinguish the locations from one another.\nSince not all networks will be visible all the time, the shape of our data will be more likely a sparse matrix.\nA sparse matrix is a matrix where most of the elements will be zero, meaning the absence of the given feature. Only the relevant elements will be non-zero and will represent the RSSI of the nth network.\nThe following example table should give you an idea of what our data will look like.\n\n\n\nLocation\nNet #1\nNet #2\nNet #3\nNet #4\nNet #5\nNet #6\nNet #7\n\n\n\n\nKitchen/1\n50\n30\n60\n0\n0\n0\n0\n\n\nKitchen/2\n55\n30\n55\n0\n0\n5\n0\n\n\nKitchen/3\n50\n35\n65\n0\n0\n0\n5\n\n\nBedroom/1\n0\n80\n0\n80\n0\n40\n40\n\n\nBedroom/2\n0\n80\n0\n85\n10\n20\n20\n\n\nBedroom/3\n0\n70\n0\n85\n0\n30\n40\n\n\nBathroom/1\n0\n0\n30\n80\n80\n0\n0\n\n\nBathroom/2\n0\n0\n10\n90\n85\n0\n0\n\n\nBathroom/3\n0\n0\n30\n90\n90\n5\n0\n\n\n\nEven though the numbers in this table are fake, you should recognize a pattern:\n\neach location is characterized by a certain combination of always-visible networks\nsome sample could be "noised" by weak networks (the 5 in the table)\n\nOur machine learning algorithm should be able to extract each location's fingerprint without being fooled by this inconsistent features.\nData gathering\nNow that we know what our data should look like, we need to first get it.\nIn the original post, this point was the one I'm unhappy with since it's not as straight-forward as I would have liked. The method I present you in this post, instead, is by far way simpler to follow.\nFirst of all, you will need a Wifi equipped board. 
I will use an Arduino MKR WiFi 1010, but any ESP8266 / ESP32 or the like will work.\nThe following sketch will do the job: it scans the visible networks at a regular interval and prints their RSSIs encoded in JSON format.\n// file DataGathering.h\n\n#include "WiFi.h"\n\n#define print(string) Serial.print(string);\n#define quote(string) print('"'); print(string); print('"');\n\nString location = "";\n\n/**\n * \n */\nvoid setup() {\n Serial.begin(115200);\n delay(3000);\n WiFi.disconnect();\n}\n\n/**\n * \n */\nvoid loop() { \n // if location is set, scan networks\n if (location != "") {\n int numNetworks = WiFi.scanNetworks();\n\n // print location\n print('{');\n quote("__location");\n print(": ");\n quote(location);\n print(", ");\n\n // print each network SSID and RSSI\n for (int i = 0; i < numNetworks; i++) {\n quote(WiFi.SSID(i));\n print(": ");\n print(WiFi.RSSI(i));\n print(i == numNetworks - 1 ? "}\\n" : ", ");\n }\n\n delay(1000);\n }\n // else wait for user to enter the location\n else {\n String input;\n\n Serial.println("Enter 'scan {location}' to start the scanning");\n\n while (!Serial.available())\n delay(200);\n\n input = Serial.readStringUntil('\\n');\n\n if (input.indexOf("scan ") == 0) {\n input.replace("scan ", "");\n location = input;\n }\n else {\n location = "";\n }\n }\n}\nUpload the sketch to your board and start mapping your house / office: go to the target location and type scan {location} in the serial monitor, where {location}is the name you want to give to the current location (so, for example, if you're mapping the kitchen, type scan kitchen).\nMove around the room a bit so you capture a few variations of the visible hotspots: this will lead to a more robust classification later on.\nTo stop the recording just type stop in the serial monitor.\nNow repeat this process for each location you want to classify. At this point you should have ended with something similar to the following:\n{"__location": "Kitchen", "N1": 100, "N2": 50}\n{"__location": "Bedroom", "N3": 100, "N2": 50}\n{"__location": "Bathroom", "N1": 100, "N4": 50}\n{"__location": "Bathroom", "N5": 100, "N4": 50}\nIn your case, "N1", "N2"... will contain the name of the visible networks.\nWhen you're happy with your training data, it's time to convert it to something useful.\nGenerating the features converter\nGiven the data we have, we want to generate C code that can convert a Wifi scan result into a feature vector we can use for classification.\nSince I'm a fan of code-generators, I wrote one specifically for this very project. And since I already have a code-generator library I use for Machine Learning code written in Python, I updated it with this new functionality.\nYou must have Python installed on your system\nStart by installing the library.\n# be sure it installs version >= 1.1.8\npip install --upgrade micromlgen\nNow create a script with the following code:\nfrom micromlgen import port_wifi_indoor_positioning\n\nif __name__ == '__main__':\n samples = '''\n {"__location": "Kitchen", "N1": 100, "N2": 50}\n {"__location": "Bedroom", "N3": 100, "N2": 50}\n {"__location": "Bathroom", "N1": 100, "N4": 50}\n {"__location": "Bathroom", "N5": 100, "N4": 50}\n '''\n X, y, classmap, converter_code = port_wifi_indoor_positioning(samples)\n print(converter_code)\nOf course you have to replace the samples content with the output you got in the previous step. \nIn the console you should see a C++ class we will use later in the Arduino sketch. 
The class should be similar to the following example code.\n// Save this code in your sketch as Converter.h\n\n#pragma once\nnamespace Eloquent {\n namespace Projects {\n class WifiIndoorPositioning {\n public:\n /**\n * Get feature vector\n */\n float* getFeatures() {\n static float features[5] = {0};\n uint8_t numNetworks = WiFi.scanNetworks();\n\n for (uint8_t i = 0; i < 5; i++) {\n features[i] = 0;\n }\n\n for (uint8_t i = 0; i < numNetworks; i++) {\n int featureIdx = ssidToFeatureIdx(WiFi.SSID(i));\n\n if (featureIdx >= 0) {\n features[featureIdx] = WiFi.RSSI(i);\n }\n }\n\n return features;\n }\n\n protected:\n /**\n * Convert SSID to featureIdx\n */\n int ssidToFeatureIdx(String ssid) {\n if (ssid.equals("N1"))\n return 0;\n\n if (ssid.equals("N2"))\n return 1;\n\n if (ssid.equals("N3"))\n return 2;\n\n if (ssid.equals("N4"))\n return 3;\n\n if (ssid.equals("N5"))\n return 4;\n\n return -1;\n }\n };\n }\n }\nI will briefly explain what it does: when you call getFeatures(), it runs a Wifi scan and for each network it finds, it fills the corresponding element in the feature vector (if the network is a known one).\nAt the end of the procedure, your feature vector will look something like [0, 10, 0, 0, 50, 0, 0], each element representing the RSSI of a given network.\n\r\n\r\n\r\n \r\n\tFinding this content useful?\r\n\r\n\t\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t \r\n \r\n \r\n \r\n\r\n\r\n\r\n\nGenerating the classifier\nTo close the loop of the project, we need to be able to classify the features vector into one of the recorded location. Since we already have micromlgen installed, it will be very easy to do so.\nLet's update the Python code we already have: this time, instead of printing the converter code, we will print the classifier code.\n# install ml package first\npip install scikit-learn\nfrom sklearn.tree import DecisionTreeClassifier\nfrom micromlgen import port_wifi_indoor_positioning, port\n\nif __name__ == '__main__':\n samples = '''\n {"__location": "Kitchen", "N1": 100, "N2": 50}\n {"__location": "Bedroom", "N3": 100, "N2": 50}\n {"__location": "Bathroom", "N1": 100, "N4": 50}\n {"__location": "Bathroom", "N5": 100, "N4": 50}\n '''\n X, y, classmap, converter_code = port_wifi_indoor_positioning(samples)\n clf = DecisionTreeClassifier()\n clf.fit(X, y)\n print(port(clf, classmap=classmap))\nHere I chose Decision tree because it is a very lightweight algorithm and should work fine for the kind of features we're working with.\nIf you're not satisfied with the results, you can try to use SVM or Gaussian Naive Bayes, which are both supported by micromlgen.\nIn the console you will see the generated code for the classifier you trained. 
In the case of DecisionTree the code will look like the following.\n// Save this code in your sketch as Classifier.h\n\n#pragma once\nnamespace Eloquent {\n namespace ML {\n namespace Port {\n class DecisionTree {\n public:\n /**\n * Predict class for features vector\n */\n int predict(float *x) {\n if (x[2] <= 25.0) {\n if (x[4] <= 50.0) {\n return 1;\n }\n\n else {\n return 2;\n }\n }\n\n else {\n return 0;\n }\n }\n\n /**\n * Convert class idx to readable name\n */\n const char* predictLabel(float *x) {\n switch (predict(x)) {\n case 0:\n return "Bathroom";\n case 1:\n return "Bedroom";\n case 2:\n return "Kitchen";\n default:\n return "Houston we have a problem";\n }\n }\n\n protected:\n };\n }\n }\n }\nWrapping it all together\nNow that we have all the pieces together, we only need to merge them to get a complete working example.\n// file WifiIndoorPositioning.h\n\n#include "WiFi.h"\n#include "Converter.h"\n#include "Classifier.h"\n\nEloquent::Projects::WifiIndoorPositioning positioning;\nEloquent::ML::Port::DecisionTree classifier;\n\nvoid setup() {\n Serial.begin(115200);\n}\n\nvoid loop() {\n Serial.print("You're in ");\n Serial.println(classifier.predictLabel(positioning.getFeatures()));\n delay(3000);\n}\nTo the bare minimum, the above code runs the scan and tells you which location you're in. That's it.\nDisclaimer\nThis system should be pretty accurate and robust if you properly gather the data, though I can quantify how much accurate.\nThis is not an indoor navigation system: it can't tell you "the coordinates" of where you are, it can only detect in which room you're in.\nIf your location lack of nearby Wifi hotspots, an easy and cheap solution would be to spawn a bunch of ESP8266 / ESP32 boards around your house each acting as Access Point: with this simple trick you should be able to be as accurate as needed by just adding more boards.\n\r\n\r\n\r\n \r\n\tFinding this content useful?\r\n\r\n\t\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t \r\n \r\n \r\n \r\n\r\n\r\n\r\n\n\nWith this in-depth tutorial I hope I helped you going from start to end of setting up a Wifi indoor positioning system using cheap hardware as ESP8266 / ESP32 boards and the Arduino IDE. \nAs you can see, Machine learning has not to be intimidating even for beginners: you just need the right tools to get the job done.\nIf this guide excited you about Machine learning on microcontrollers, I invite you to read the many other posts I wrote on the topic and share them on the socials.\nYou can find the whole project on Github. 
Don't forget to star the repo if you like it.\nL'articolo The Ultimate Guide to Wifi Indoor Positioning using Arduino and Machine Learning proviene da Eloquent Arduino Blog.", "date_published": "2020-08-08T15:21:25+02:00", "date_modified": "2020-08-09T16:19:32+02:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "Senza categoria" ] }, { "id": "https://eloquentarduino.github.io/?p=1225", "url": "https://eloquentarduino.github.io/2020/08/eloquentml-grows-its-family-of-classifiers-gaussian-naive-bayes-on-arduino/", "title": "EloquentML grows its family of classifiers: Gaussian Naive Bayes on Arduino", "content_html": "Are you looking for a top-performer classifiers with a minimal amount of parameters to tune? Look no further: Gaussian Naive Bayes is what you're looking for. And thanks to EloquentML you can now port it to your microcontroller.
\n\n\n
Naive Bayes classifiers are simple models based on probability theory that can be used for classification.
\nThey originate from the assumption of independence among the input variables. Even though this assumption doesn't hold true in the vast majority of cases, they often perform very well at many classification tasks, so they're quite popular.
\nGaussian Naive Bayes stacks another (mostly wrong) assumption on top: that the variables exhibit a Gaussian probability distribution.
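\nIn formulas (the textbook formulation, nothing sklearn-specific), the predicted class is the one maximizing P(y \mid x) \propto P(y) \prod_i \mathcal{N}(x_i;\ \theta_{y,i},\ \sigma_{y,i}^2), where \theta_{y,i} and \sigma_{y,i}^2 are the mean and variance of feature i within class y.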
\nI (and many others like me) will never understand how it is possible that so many wrong assumptions lead to such good performances!
\nNevertheless, what is important to us is that sklearn implements GaussianNB, so we can easily train such a classifier.
\nThe most interesting part is that GaussianNB
can be tuned with just a single parameter: var_smoothing
.
Don't ask me what it does in theory: in practice, changing it can boost your accuracy. This leads to an easy tuning process that doesn't involve an expensive grid search.
\nimport sklearn.datasets as d\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import normalize\nfrom sklearn.naive_bayes import GaussianNB\n\ndef pick_best(X_train, X_test, y_train, y_test):\n best = (None, 0)\n for var_smoothing in range(-7, 1):\n clf = GaussianNB(var_smoothing=pow(10, var_smoothing))\n clf.fit(X_train, y_train)\n y_pred = clf.predict(X_test)\n accuracy = (y_pred == y_test).sum()\n if accuracy > best[1]:\n best = (clf, accuracy)\n print('best accuracy', best[1] / len(y_test))\n return best[0]\n\niris = d.load_iris()\nX = normalize(iris.data)\ny = iris.target\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)\nclf = pick_best(X_train, X_test, y_train, y_test)
\nThis simple procedure will train a bunch of classifiers with a different var_smoothing
factor and pick the best performing one.
Once you have your trained classifier, porting it to C is as easy as always:
\nfrom micromlgen import port\n\n# pick_best() is the helper defined above and needs the train/test split\nclf = pick_best(X_train, X_test, y_train, y_test)\nprint(port(clf))
\nAlways remember to run
\npip install --upgrade micromlgen
\n\nport
is a magic method able to port many classifiers: it will automatically detect the proper converter for you.
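\nA quick sketch of what that means in practice (iris is just a convenient toy dataset here):
\nfrom sklearn.datasets import load_iris\nfrom sklearn.svm import SVC\nfrom sklearn.tree import DecisionTreeClassifier\nfrom micromlgen import port\n\nX, y = load_iris(return_X_y=True)\n\n# port() picks the proper converter based on the type of classifier you pass in\nfor clf in [SVC(kernel='linear'), DecisionTreeClassifier(max_depth=5)]:\n    clf.fit(X, y)\n    print(port(clf))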
What does the exported code look like?
\n#pragma once\nnamespace Eloquent {\n namespace ML {\n namespace Port {\n class GaussianNB {\n public:\n /**\n * Predict class for features vector\n */\n int predict(float *x) {\n float votes[3] = { 0.0f };\n float theta[4] = { 0 };\n float sigma[4] = { 0 };\n theta[0] = 0.801139789889; theta[1] = 0.54726920354; theta[2] = 0.234408773313; theta[3] = 0.039178084094;\n sigma[0] = 0.000366881742; sigma[1] = 0.000907992556; sigma[2] = 0.000740960787; sigma[3] = 0.000274925514;\n votes[0] = 0.333333333333 - gauss(x, theta, sigma);\n theta[0] = 0.748563871324; theta[1] = 0.349390892644; theta[2] = 0.536186138345; theta[3] = 0.166747384117;\n sigma[0] = 0.000529727082; sigma[1] = 0.000847956504; sigma[2] = 0.000690057342; sigma[3] = 0.000311828658;\n votes[1] = 0.333333333333 - gauss(x, theta, sigma);\n theta[0] = 0.704497203305; theta[1] = 0.318862439835; theta[2] = 0.593755956917; theta[3] = 0.217288784452;\n sigma[0] = 0.000363782089; sigma[1] = 0.000813846722; sigma[2] = 0.000415475678; sigma[3] = 0.000758478249;\n votes[2] = 0.333333333333 - gauss(x, theta, sigma);\n // return argmax of votes\n uint8_t classIdx = 0;\n float maxVotes = votes[0];\n\n for (uint8_t i = 1; i < 3; i++) {\n if (votes[i] > maxVotes) {\n classIdx = i;\n maxVotes = votes[i];\n }\n }\n\n return classIdx;\n }\n\n protected:\n /**\n * Compute gaussian value\n */\n float gauss(float *x, float *theta, float *sigma) {\n float gauss = 0.0f;\n\n for (uint16_t i = 0; i < 4; i++) {\n gauss += log(sigma[i]);\n gauss += pow(x[i] - theta[i], 2) / sigma[i];\n }\n\n return gauss;\n }\n };\n }\n }\n }
\nAs you can see, we need a couple of "weight vectors":
\ntheta is the mean of each feature
sigma is the standard deviation
The computation is quite thin: just a couple of operations; the class with the highest score is then selected.
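\nWritten out, the generated predict() computes, for each class c, a score of the form vote_c = P(c) - \sum_i \left[ \log \sigma_{c,i} + \frac{(x_i - \theta_{c,i})^2}{\sigma_{c,i}} \right] (a simplified Gaussian log-likelihood, up to constant factors) and returns \arg\max_c vote_c.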
\nBelow is a recap of a couple of benchmarks I ran on an Arduino Nano 33 BLE Sense.
\nClassifier | Dataset | Flash | RAM | Execution time | Accuracy
---|---|---|---|---|---
GaussianNB | Iris (150x4) | 82 Kb | 42 Kb | 65 ms | 97%
LinearSVC | Iris (150x4) | 83 Kb | 42 Kb | 76 ms | 99%
GaussianNB | Breast cancer (80x40) | 90 Kb | 42 Kb | 160 ms | 77%
LinearSVC | Breast cancer (80x40) | 112 Kb | 42 Kb | 378 ms | 73%
GaussianNB | Wine (100x13) | 85 Kb | 42 Kb | 130 ms | 97%
LinearSVC | Wine (100x13) | 89 Kb | 42 Kb | 125 ms | 99%
We can see that the accuracy is on par with a linear SVM, reaching up to 97% on some datasets. Its simplicity shines on higher-dimensional datasets (breast cancer), where the execution time is half that of LinearSVC: I can see this pattern repeating with other real-world, medium-sized datasets.
\nThis is it, you can find the example project on Github.
\nL'articolo EloquentML grows its family of classifiers: Gaussian Naive Bayes on Arduino proviene da Eloquent Arduino Blog.
\n", "content_text": "Are you looking for a top-performer classifiers with a minimal amount of parameters to tune? Look no further: Gaussian Naive Bayes is what you're looking for. And thanks to EloquentML you can now port it to your microcontroller.\n\n\n(Gaussian) Naive Bayes\nNaive Bayes classifiers are simple models based on the probability theory that can be used for classification.\nThey originate from the assumption of independence among the input variables. Even though this assumption doesn't hold true in the vast majority of the cases, they often perform very good at many classification tasks, so they're quite popular.\nGaussian Naive Bayes stack another (mostly wrong) assumption: that the variables exhibit a Gaussian probability distribution.\nI (and many others like me) will never understand how it is possible that so many wrong assumptions lead to such good performances!\nNevertheless, what is important to us is that sklearn implements GaussianNB, so we easily train such a classifier.\nThe most interesting part is that GaussianNB can be tuned with just a single parameter: var_smoothing.\nDon't ask me what it does in theory: in practice you change it and your accuracy can boost. This leads to an easy tuning process that doesn't involves expensive grid search.\nimport sklearn.datasets as d\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import normalize\nfrom sklearn.naive_bayes import GaussianNB\n\ndef pick_best(X_train, X_test, y_train, y_test):\n best = (None, 0)\n for var_smoothing in range(-7, 1):\n clf = GaussianNB(var_smoothing=pow(10, var_smoothing))\n clf.fit(X_train, y_train)\n y_pred = clf.predict(X_test)\n accuracy = (y_pred == y_test).sum()\n if accuracy > best[1]:\n best = (clf, accuracy)\n print('best accuracy', best[1] / len(y_test))\n return best[0]\n\niris = d.load_iris()\nX = normalize(iris.data)\ny = iris.target\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)\nclf = pick_best(X_train, X_test, y_train, y_test)\nThis simple procedure will train a bunch of classifiers with a different var_smoothing factor and pick the best performing one.\nEloquentML integration\nOnce you have your trained classifier, porting it to C is as easy as always:\nfrom micromlgen import port\n\nclf = pick_best()\nprint(port(clf))\nAlways remember to run \npip install --upgrade micromlgen\n\nport is a magic method able to port many classifiers: it will automatically detect the proper converter for you.\nWhat does the exported code looks like?\n#pragma once\nnamespace Eloquent {\n namespace ML {\n namespace Port {\n class GaussianNB {\n public:\n /**\n * Predict class for features vector\n */\n int predict(float *x) {\n float votes[3] = { 0.0f };\n float theta[4] = { 0 };\n float sigma[4] = { 0 };\n theta[0] = 0.801139789889; theta[1] = 0.54726920354; theta[2] = 0.234408773313; theta[3] = 0.039178084094;\n sigma[0] = 0.000366881742; sigma[1] = 0.000907992556; sigma[2] = 0.000740960787; sigma[3] = 0.000274925514;\n votes[0] = 0.333333333333 - gauss(x, theta, sigma);\n theta[0] = 0.748563871324; theta[1] = 0.349390892644; theta[2] = 0.536186138345; theta[3] = 0.166747384117;\n sigma[0] = 0.000529727082; sigma[1] = 0.000847956504; sigma[2] = 0.000690057342; sigma[3] = 0.000311828658;\n votes[1] = 0.333333333333 - gauss(x, theta, sigma);\n theta[0] = 0.704497203305; theta[1] = 0.318862439835; theta[2] = 0.593755956917; theta[3] = 0.217288784452;\n sigma[0] = 0.000363782089; sigma[1] = 0.000813846722; sigma[2] = 0.000415475678; 
sigma[3] = 0.000758478249;\n votes[2] = 0.333333333333 - gauss(x, theta, sigma);\n // return argmax of votes\n uint8_t classIdx = 0;\n float maxVotes = votes[0];\n\n for (uint8_t i = 1; i < 3; i++) {\n if (votes[i] > maxVotes) {\n classIdx = i;\n maxVotes = votes[i];\n }\n }\n\n return classIdx;\n }\n\n protected:\n /**\n * Compute gaussian value\n */\n float gauss(float *x, float *theta, float *sigma) {\n float gauss = 0.0f;\n\n for (uint16_t i = 0; i < 4; i++) {\n gauss += log(sigma[i]);\n gauss += pow(x[i] - theta[i], 2) / sigma[i];\n }\n\n return gauss;\n }\n };\n }\n }\n }\n\r\n\r\n\r\n \r\n\tFinding this content useful?\r\n\r\n\t\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t \r\n \r\n \r\n \r\n\r\n\r\n\r\n\nAs you can see, we need a couple of "weight vectors":\n\ntheta is the mean of each feature\nsigma is the standard deviation\n\nThe computation is quite thin: just a couple of operations; the class with the highest score is then selected.\nBenchmarks\nFollowing there's a recap of a couple benchmarks I run on an Arduino Nano 33 Ble Sense.\n\n\n\nClassifier\nDataset\nFlash\nRAM\nExecution time\nAccuracy\n\n\n\n\nGaussianNB\nIris (150x4)\n82 kb\n42 Kb\n65 ms\n97%\n\n\nLinearSVC\nIris (150x4)\n83 Kb\n42 Kb\n76 ms\n99%\n\n\nGaussianNB\nBreast cancer (80x40)\n90 Kb\n42 Kb\n160 ms\n77%\n\n\nLinearSVC\nBreast cancer (80x40)\n112 Kb\n42 Kb\n378 ms\n73%\n\n\nGaussianNB\nWine (100x13)\n85 Kb\n42 Kb\n130 ms\n97%\n\n\nLinearSVC\nWine (100x13)\n89 Kb\n42 Kb\n125 ms\n99%\n\n\n\nWe can see that the accuracy is on par with a linear SVM, reaching up to 97% on some datasets. Its semplicity shines with high-dimensional datasets (breast cancer) where execution time is half of the LinearSVC: I can see this pattern repeating with other real-world, medium-sized datasets.\n\nThis is it, you can find the example project on Github.\nL'articolo EloquentML grows its family of classifiers: Gaussian Naive Bayes on Arduino proviene da Eloquent Arduino Blog.", "date_published": "2020-08-02T10:44:36+02:00", "date_modified": "2020-08-02T11:36:42+02:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "microml", "ml", "Arduino Machine learning" ] }, { "id": "https://eloquentarduino.github.io/?p=1214", "url": "https://eloquentarduino.github.io/2020/07/sefr-a-fast-linear-time-classifier-for-ultra-low-power-devices/", "title": "SEFR: A Fast Linear-Time Classifier for Ultra-Low Power Devices", "content_html": "A brand new binary classifier that's tiny and accurate, perfect for embedded scenarios: easily achieve 90+ % accuracy with a minimal memory footprint!
\n\n\n
A few weeks ago I was wandering over arxiv.org looking for insipiration relative to Machine learning on microcontrollers when I found exactly what I was looking for.
\nSEFR: A Fast Linear-Time Classifier for Ultra-Low Power Devices is a paper from Hamidreza Keshavarz, Mohammad Saniee Abadeh, Reza Rawassizadeh where the authors develop a binary classifier that is:
\nIt has been specifically designed for embedded machine learning, so no optimization is required to run in on microcontrollers: it is tiny by design. In short, it uses a combination of the averages of the features as weights plus a bias to distinguish between positive and negative class. If you read the paper you will sure understand it: it's very straightforward.
\nThe authors both provided a C and Python implementation on Github you can read. I ported the C version "manually" to my Eloquent ML library and created a Python package called sefr copy-pasting from the original repo.
\nHere's a Python example.
\nfrom sefr import SEFR\nfrom sklearn.datasets import load_iris\nfrom sklearn.preprocessing import normalize\nfrom sklearn.model_selection import train_test_split\n\nif __name__ == '__main__':\n iris = load_iris()\n X = normalize(iris.data)\n y = iris.target\n X = X[y < 2]\n y = y[y < 2]\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)\n clf = SEFR()\n clf.fit(X_train, y_train)\n print('accuracy', (clf.predict(X_test) == y_test).sum() / len(y_test))
\nHow good is it?
\nDataset | \nNo. of features | \nAccuracy | \n
---|---|---|
Iris | \n4 | \n100% | \n
Breast cancer | \n30 | \n89% | \n
Wine | \n13 | \n84% | \n
Digits | \n64 | \n99% | \n
Considering that the model only needs 1 weight per feature, I think this results are impressive!
\nThe Python porting was done so I could integrate it easily in my micromlgen package.
\nHow to use it?
\nfrom sefr import SEFR\nfrom sklearn.datasets import load_iris\nfrom micromlgen import port\n\nif __name__ == '__main__':\n iris = load_iris()\n X = iris.data\n y = iris.target\n X = X[y < 2]\n y = y[y < 2]\n clf = SEFR()\n clf.fit(X_train, y_train)\n print(port(clf))
\nThe produced code is so compact I will report it here.
\n\r\n#pragma once\nnamespace Eloquent {\n namespace ML {\n namespace Port {\n class SEFR {\n public:\n /**\n * Predict class for features vector\n */\n int predict(float *x) {\n return dot(x, 0.084993602632 , -0.106163278477 , 0.488989863684 , 0.687022900763 ) <= 2.075 ? 0 : 1;\n }\n\n protected:\n /**\n * Compute dot product between features vector and classifier weights\n */\n float dot(float *x, ...) {\n va_list w;\n va_start(w, 4);\n float kernel = 0.0;\n\n for (uint16_t i = 0; i < 4; i++) {\n kernel += x[i] * va_arg(w, double);\n }\n\n return kernel;\n }\n };\n }\n }\n }
\nIn your sketch:
\n#include "IrisSEFR.h"\n#include "IrisTest.h"\n\nvoid setup() {\n Serial.begin(115200);\n}\n\nvoid loop() {\n Eloquent::ML::Port::SEFR clf;\n Eloquent::ML::Test::IrisTestSet testSet;\n\n testSet.test(clf);\n Serial.println(testSet.dump());\n delay(5000);\n}
\nYou have to clone the Github example to compile the code.
\nThat's all for today, I hope you will try this classifier and find a project it fits in: I'm very impressed by the easiness of implementation yet the accuracy it can achieve on benchmark datasets.
\nIn the next weeks I'm thinking in implementing a multi-class version of this and see how it performs, so stay tuned!
\nL'articolo SEFR: A Fast Linear-Time Classifier for Ultra-Low Power Devices proviene da Eloquent Arduino Blog.
\n", "content_text": "A brand new binary classifier that's tiny and accurate, perfect for embedded scenarios: easily achieve 90+ % accuracy with a minimal memory footprint!\n\n\nA few weeks ago I was wandering over arxiv.org looking for insipiration relative to Machine learning on microcontrollers when I found exactly what I was looking for.\nSEFR: A Fast Linear-Time Classifier for Ultra-Low Power Devices is a paper from Hamidreza Keshavarz, Mohammad Saniee Abadeh, Reza Rawassizadeh where the authors develop a binary classifier that is:\n\nfast during training\nfast during prediction\nrequires minimal memory\n\nIt has been specifically designed for embedded machine learning, so no optimization is required to run in on microcontrollers: it is tiny by design. In short, it uses a combination of the averages of the features as weights plus a bias to distinguish between positive and negative class. If you read the paper you will sure understand it: it's very straightforward.\nHow to use\nThe authors both provided a C and Python implementation on Github you can read. I ported the C version "manually" to my Eloquent ML library and created a Python package called sefr copy-pasting from the original repo.\nHere's a Python example.\nfrom sefr import SEFR\nfrom sklearn.datasets import load_iris\nfrom sklearn.preprocessing import normalize\nfrom sklearn.model_selection import train_test_split\n\nif __name__ == '__main__':\n iris = load_iris()\n X = normalize(iris.data)\n y = iris.target\n X = X[y < 2]\n y = y[y < 2]\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)\n clf = SEFR()\n clf.fit(X_train, y_train)\n print('accuracy', (clf.predict(X_test) == y_test).sum() / len(y_test))\nHow good is it?\n\n\n\nDataset\nNo. of features\nAccuracy\n\n\n\n\nIris\n4\n100%\n\n\nBreast cancer\n30\n89%\n\n\nWine\n13\n84%\n\n\nDigits\n64\n99%\n\n\n\nConsidering that the model only needs 1 weight per feature, I think this results are impressive!\nMicromlgen integration\nThe Python porting was done so I could integrate it easily in my micromlgen package.\nHow to use it?\nfrom sefr import SEFR\nfrom sklearn.datasets import load_iris\nfrom micromlgen import port\n\nif __name__ == '__main__':\n iris = load_iris()\n X = iris.data\n y = iris.target\n X = X[y < 2]\n y = y[y < 2]\n clf = SEFR()\n clf.fit(X_train, y_train)\n print(port(clf))\nThe produced code is so compact I will report it here.\n\r\n\r\n\r\n \r\n\tFinding this content useful?\r\n\r\n\t\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t \r\n \r\n \r\n \r\n\r\n\r\n\r\n\n#pragma once\nnamespace Eloquent {\n namespace ML {\n namespace Port {\n class SEFR {\n public:\n /**\n * Predict class for features vector\n */\n int predict(float *x) {\n return dot(x, 0.084993602632 , -0.106163278477 , 0.488989863684 , 0.687022900763 ) <= 2.075 ? 0 : 1;\n }\n\n protected:\n /**\n * Compute dot product between features vector and classifier weights\n */\n float dot(float *x, ...) 
{\n va_list w;\n va_start(w, 4);\n float kernel = 0.0;\n\n for (uint16_t i = 0; i < 4; i++) {\n kernel += x[i] * va_arg(w, double);\n }\n\n return kernel;\n }\n };\n }\n }\n }\nIn your sketch:\n#include "IrisSEFR.h"\n#include "IrisTest.h"\n\nvoid setup() {\n Serial.begin(115200);\n}\n\nvoid loop() {\n Eloquent::ML::Port::SEFR clf;\n Eloquent::ML::Test::IrisTestSet testSet;\n\n testSet.test(clf);\n Serial.println(testSet.dump());\n delay(5000);\n}\nYou have to clone the Github example to compile the code.\n\nThat's all for today, I hope you will try this classifier and find a project it fits in: I'm very impressed by the easiness of implementation yet the accuracy it can achieve on benchmark datasets.\nIn the next weeks I'm thinking in implementing a multi-class version of this and see how it performs, so stay tuned!\nL'articolo SEFR: A Fast Linear-Time Classifier for Ultra-Low Power Devices proviene da Eloquent Arduino Blog.", "date_published": "2020-07-10T17:09:58+02:00", "date_modified": "2020-07-12T17:04:14+02:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "microml", "Arduino Machine learning" ] }, { "id": "https://eloquentarduino.github.io/?p=1203", "url": "https://eloquentarduino.github.io/2020/06/easy-esp32-camera-http-video-streaming-server/", "title": "Easy ESP32 camera HTTP video streaming server", "content_html": "This will be a short post where I introduce a new addition to the Arduino Eloquent library aimed to make video streaming from an ESP32 camera over HTTP super easy. It will be the first component of a larger project I'm going to implement.
\n\n
If you Google "esp32 video streaming" you will get a bunch of results that are essentialy copy-pasted from the official Espressif repo: many of them neither copy-pasted the code, just tell you to load the example sketch.
\nAnd if you try to read it and try to modify just a bit for your own use-case, you won't understand much.
\nThis is the exact environment for an Eloquent component to live: make it painfully easy what's messy.
\nI still have to find a good naming scheme for my libraries since Arduino IDE doesn't allow nested imports, so forgive me if "ESP32CameraHTTPVideoStreamingServer.h" was the best that came to mind.
\nHow easy is it to use?
\n1 line of code if used in conjuction with my other library EloquentVision.
\n#define CAMERA_MODEL_M5STACK_WIDE\n#include "WiFi.h"\n#include "EloquentVision.h"\n#include "ESP32CameraHTTPVideoStreamingServer.h"\n\nusing namespace Eloquent::Vision;\nusing namespace Eloquent::Vision::Camera;\n\nESP32Camera camera;\nHTTPVideoStreamingServer server(81);\n\n/**\n *\n */\nvoid setup() {\n Serial.begin(115200);\n WiFi.softAP("ESP32", "12345678");\n\n camera.begin(FRAMESIZE_QVGA, PIXFORMAT_JPEG);\n server.start();\n\n Serial.print("Camera Ready! Use 'http://");\n Serial.print(WiFi.softAPIP());\n Serial.println(":81' to stream");\n}\n\nvoid loop() {\n}
\nHTTPVideoStreamingServer
assumes you already initialized your camera. You can achieve this task in the way you prefer: ESP32Camera
class makes this a breeze.
81
in the server constructor is the port you want the server to be listening to.
Once connected to WiFi or started in AP mode, all you have to do is call start()
: that's it!
What else is it good for?
\nThe main reason I wrote this piece of library is because one of you reader commented on the motion detection post asking if it would be possible to start the video streaming once motion is detected.
\nOf course it is.
\nIt's just a matter of composing the Eloquent pieces.
\n// not workings AS-IS, needs refactoring\n\n#define CAMERA_MODEL_M5STACK_WIDE\n#include "WiFi.h"\n#include "EloquentVision.h"\n#include "ESP32CameraHTTPVideoStreamingServer.h"\n\n#define FRAME_SIZE FRAMESIZE_QVGA\n#define SOURCE_WIDTH 320\n#define SOURCE_HEIGHT 240\n#define CHANNELS 1\n#define DEST_WIDTH 32\n#define DEST_HEIGHT 24\n#define BLOCK_VARIATION_THRESHOLD 0.3\n#define MOTION_THRESHOLD 0.2\n\n// we're using the Eloquent::Vision namespace a lot!\nusing namespace Eloquent::Vision;\nusing namespace Eloquent::Vision::Camera;\nusing namespace Eloquent::Vision::ImageProcessing;\nusing namespace Eloquent::Vision::ImageProcessing::Downscale;\nusing namespace Eloquent::Vision::ImageProcessing::DownscaleStrategies;\n\nESP32Camera camera;\nHTTPVideoStreamingServer server(81);\n// the buffer to store the downscaled version of the image\nuint8_t resized[DEST_HEIGHT][DEST_WIDTH];\n// the downscaler algorithm\n// for more details see https://eloquentarduino.github.io/2020/05/easier-faster-pure-video-esp32-cam-motion-detection\nCross<SOURCE_WIDTH, SOURCE_HEIGHT, DEST_WIDTH, DEST_HEIGHT> crossStrategy;\n// the downscaler container\nDownscaler<SOURCE_WIDTH, SOURCE_HEIGHT, CHANNELS, DEST_WIDTH, DEST_HEIGHT> downscaler(&crossStrategy);\n// the motion detection algorithm\nMotionDetection<DEST_WIDTH, DEST_HEIGHT> motion;\n\n/**\n *\n */\nvoid setup() {\n Serial.begin(115200);\n WiFi.softAP("ESP32", "12345678");\n\n camera.begin(FRAMESIZE_QVGA, PIXFORMAT_GRAYSCALE);\n motion.setBlockVariationThreshold(BLOCK_VARIATION_THRESHOLD);\n\n Serial.print("Camera Ready! Use 'http://");\n Serial.print(WiFi.softAPIP());\n Serial.println(":81' to stream");\n}\n\nvoid loop() {\n camera_fb_t *frame = camera.capture();\n\n // resize image and detect motion\n downscaler.downscale(frame->buf, resized);\n motion.update(resized);\n motion.detect();\n\n if (motion.ratio() > MOTION_THRESHOLD) {\n Serial.print("Motion detected");\n // start the streaming server when motion is detected\n // shutdown after 20 seconds if no one connects\n camera.begin(FRAMESIZE_QVGA, PIXFORMAT_JPEG);\n delay(2000);\n Serial.print("Camera Server ready! Use 'http://");\n Serial.print(WiFi.softAPIP());\n Serial.println(":81' to stream");\n server.start();\n delay(20000);\n server.stop();\n camera.begin(FRAMESIZE_QVGA, PIXFORMAT_GRAYSCALE);\n delay(2000);\n }\n\n // probably we don't need 30 fps, save some power\n delay(300);\n}
\nDoes it look good?
\nNow the rationale behind Eloquent components should be starting to be clear to you: easy to use objects you can compose the way it fits to achieve the result you want.
\nWould you suggest me more piece of functionality you would like to see wrapped in an Eloquent component?
\nYou can find the class code and the example sketch on the Github repo.
\nL'articolo Easy ESP32 camera HTTP video streaming server proviene da Eloquent Arduino Blog.
\n", "content_text": "This will be a short post where I introduce a new addition to the Arduino Eloquent library aimed to make video streaming from an ESP32 camera over HTTP super easy. It will be the first component of a larger project I'm going to implement.\n\nIf you Google "esp32 video streaming" you will get a bunch of results that are essentialy copy-pasted from the official Espressif repo: many of them neither copy-pasted the code, just tell you to load the example sketch.\nAnd if you try to read it and try to modify just a bit for your own use-case, you won't understand much.\nThis is the exact environment for an Eloquent component to live: make it painfully easy what's messy.\nI still have to find a good naming scheme for my libraries since Arduino IDE doesn't allow nested imports, so forgive me if "ESP32CameraHTTPVideoStreamingServer.h" was the best that came to mind.\nHow easy is it to use?\n1 line of code if used in conjuction with my other library EloquentVision.\n#define CAMERA_MODEL_M5STACK_WIDE\n#include "WiFi.h"\n#include "EloquentVision.h"\n#include "ESP32CameraHTTPVideoStreamingServer.h"\n\nusing namespace Eloquent::Vision;\nusing namespace Eloquent::Vision::Camera;\n\nESP32Camera camera;\nHTTPVideoStreamingServer server(81);\n\n/**\n *\n */\nvoid setup() {\n Serial.begin(115200);\n WiFi.softAP("ESP32", "12345678");\n\n camera.begin(FRAMESIZE_QVGA, PIXFORMAT_JPEG);\n server.start();\n\n Serial.print("Camera Ready! Use 'http://");\n Serial.print(WiFi.softAPIP());\n Serial.println(":81' to stream");\n}\n\nvoid loop() {\n}\nHTTPVideoStreamingServer assumes you already initialized your camera. You can achieve this task in the way you prefer: ESP32Camera class makes this a breeze.\n81 in the server constructor is the port you want the server to be listening to.\nOnce connected to WiFi or started in AP mode, all you have to do is call start(): that's it!\n\r\n\r\n\r\n \r\n\tFinding this content useful?\r\n\r\n\t\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t \r\n \r\n \r\n \r\n\r\n\r\n\r\n\nWhat else is it good for?\nThe main reason I wrote this piece of library is because one of you reader commented on the motion detection post asking if it would be possible to start the video streaming once motion is detected.\nOf course it is.\nIt's just a matter of composing the Eloquent pieces.\n// not workings AS-IS, needs refactoring\n\n#define CAMERA_MODEL_M5STACK_WIDE\n#include "WiFi.h"\n#include "EloquentVision.h"\n#include "ESP32CameraHTTPVideoStreamingServer.h"\n\n#define FRAME_SIZE FRAMESIZE_QVGA\n#define SOURCE_WIDTH 320\n#define SOURCE_HEIGHT 240\n#define CHANNELS 1\n#define DEST_WIDTH 32\n#define DEST_HEIGHT 24\n#define BLOCK_VARIATION_THRESHOLD 0.3\n#define MOTION_THRESHOLD 0.2\n\n// we're using the Eloquent::Vision namespace a lot!\nusing namespace Eloquent::Vision;\nusing namespace Eloquent::Vision::Camera;\nusing namespace Eloquent::Vision::ImageProcessing;\nusing namespace Eloquent::Vision::ImageProcessing::Downscale;\nusing namespace Eloquent::Vision::ImageProcessing::DownscaleStrategies;\n\nESP32Camera camera;\nHTTPVideoStreamingServer server(81);\n// the buffer to store the downscaled version of the image\nuint8_t resized[DEST_HEIGHT][DEST_WIDTH];\n// the downscaler algorithm\n// for more details see https://eloquentarduino.github.io/2020/05/easier-faster-pure-video-esp32-cam-motion-detection\nCross<SOURCE_WIDTH, SOURCE_HEIGHT, DEST_WIDTH, DEST_HEIGHT> crossStrategy;\n// the downscaler container\nDownscaler<SOURCE_WIDTH, SOURCE_HEIGHT, CHANNELS, DEST_WIDTH, DEST_HEIGHT> 
downscaler(&crossStrategy);\n// the motion detection algorithm\nMotionDetection<DEST_WIDTH, DEST_HEIGHT> motion;\n\n/**\n *\n */\nvoid setup() {\n Serial.begin(115200);\n WiFi.softAP("ESP32", "12345678");\n\n camera.begin(FRAMESIZE_QVGA, PIXFORMAT_GRAYSCALE);\n motion.setBlockVariationThreshold(BLOCK_VARIATION_THRESHOLD);\n\n Serial.print("Camera Ready! Use 'http://");\n Serial.print(WiFi.softAPIP());\n Serial.println(":81' to stream");\n}\n\nvoid loop() {\n camera_fb_t *frame = camera.capture();\n\n // resize image and detect motion\n downscaler.downscale(frame->buf, resized);\n motion.update(resized);\n motion.detect();\n\n if (motion.ratio() > MOTION_THRESHOLD) {\n Serial.print("Motion detected");\n // start the streaming server when motion is detected\n // shutdown after 20 seconds if no one connects\n camera.begin(FRAMESIZE_QVGA, PIXFORMAT_JPEG);\n delay(2000);\n Serial.print("Camera Server ready! Use 'http://");\n Serial.print(WiFi.softAPIP());\n Serial.println(":81' to stream");\n server.start();\n delay(20000);\n server.stop();\n camera.begin(FRAMESIZE_QVGA, PIXFORMAT_GRAYSCALE);\n delay(2000);\n }\n\n // probably we don't need 30 fps, save some power\n delay(300);\n}\nDoes it look good?\nNow the rationale behind Eloquent components should be starting to be clear to you: easy to use objects you can compose the way it fits to achieve the result you want.\nWould you suggest me more piece of functionality you would like to see wrapped in an Eloquent component?\n\nYou can find the class code and the example sketch on the Github repo.\nL'articolo Easy ESP32 camera HTTP video streaming server proviene da Eloquent Arduino Blog.", "date_published": "2020-06-24T19:27:33+02:00", "date_modified": "2020-12-16T21:29:52+01:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "camera", "esp32", "Eloquent library" ] } ] }