{ "version": "https://jsonfeed.org/version/1.1", "user_comment": "This feed allows you to read the posts from this site in any feed reader that supports the JSON Feed format. To add this feed to your reader, copy the following URL -- https://eloquentarduino.github.io/category/programming/computer-vision/feed/json/ -- and add it your reader.", "home_page_url": "https://eloquentarduino.github.io/category/programming/computer-vision/", "feed_url": "https://eloquentarduino.github.io/category/programming/computer-vision/feed/json/", "language": "en-US", "title": "Computer vision – Eloquent Arduino Blog", "description": "Machine learning on Arduino, programming & electronics", "items": [ { "id": "https://eloquentarduino.github.io/?p=1390", "url": "https://eloquentarduino.github.io/2020/12/esp32-cam-motion-detection-with-photo-capture-grayscale-version/", "title": "Esp32-cam motion detection WITH PHOTO CAPTURE! (grayscale version)", "content_html": "

Do you want to transform your cheap esp32-cam into a DIY surveillance camera with motion detection AND photo capture?

\n

Look no further: this post explains STEP-BY-STEP all you need to know to build one yourself!

\n

\"Esp32-cam

\n

\n

As I told you in the Easier, faster pure video Esp32-cam motion detection post, motion detection on the esp32-cam seems to be the hottest topic on my blog, so I thought it deserved some more tutorials.

\n

Without question, the #1 request you made in the comments was

\n
\n

How can I save the image that triggered the motion detection to the disk?

\n
\n

Well, in this post I will show you how to save the image to the SPIFFS filesystem your esp32-cam comes equipped with!

\n

Motion detection, refactored

\n

Please read the post on easier, faster esp32-cam motion detection first if you want to understand the following code.

\n

It took me quite some time to write this post because I was struggling to design a clear, easy-to-use API for the motion detection feature and the image storage.

\n

And I have to admit that, even after so long, I'm still not satisfied with the results.

\n

Nonetheless, it works, and it works well in my opinion, so I will publish this and maybe get feedback from you to help me improve (so please leave a comment if you have any suggestions).

\n

I won't bother you with the design considerations, since this is a hands-on tutorial: let's take a look at the code to implement motion detection on the esp32-cam, or any other esp32 with a camera attached (I'm using the M5Stick camera).

\n

First of all, you need the EloquentVision library: you can install it either from Github or using the Arduino IDE's Library Manager.

\n

Next, the code.

\n
// Change according to your model\n// The models available are\n//   - CAMERA_MODEL_WROVER_KIT\n//   - CAMERA_MODEL_ESP_EYE\n//   - CAMERA_MODEL_M5STACK_PSRAM\n//   - CAMERA_MODEL_M5STACK_WIDE\n//   - CAMERA_MODEL_AI_THINKER\n#define CAMERA_MODEL_M5STACK_WIDE\n\n#include <FS.h>\n#include <SPIFFS.h>\n#include "EloquentVision.h"\n\n// set the resolution of the source image and the resolution of the downscaled image for the motion detection\n#define FRAME_SIZE FRAMESIZE_QVGA\n#define SOURCE_WIDTH 320\n#define SOURCE_HEIGHT 240\n#define CHANNELS 1\n#define DEST_WIDTH 32\n#define DEST_HEIGHT 24\n#define BLOCK_VARIATION_THRESHOLD 0.3\n#define MOTION_THRESHOLD 0.2\n\n// we're using the Eloquent::Vision namespace a lot!\nusing namespace Eloquent::Vision;\nusing namespace Eloquent::Vision::IO;\nusing namespace Eloquent::Vision::ImageProcessing;\nusing namespace Eloquent::Vision::ImageProcessing::Downscale;\nusing namespace Eloquent::Vision::ImageProcessing::DownscaleStrategies;\n\n// an easy interface to capture images from the camera\nESP32Camera camera;\n// the buffer to store the downscaled version of the image\nuint8_t resized[DEST_HEIGHT][DEST_WIDTH];\n// the downscaler algorithm\n// for more details see https://eloquentarduino.github.io/2020/05/easier-faster-pure-video-esp32-cam-motion-detection\nCross<SOURCE_WIDTH, SOURCE_HEIGHT, DEST_WIDTH, DEST_HEIGHT> crossStrategy;\n// the downscaler container\nDownscaler<SOURCE_WIDTH, SOURCE_HEIGHT, CHANNELS, DEST_WIDTH, DEST_HEIGHT> downscaler(&crossStrategy);\n// the motion detection algorithm\nMotionDetection<DEST_WIDTH, DEST_HEIGHT> motion;\n\nvoid setup() {\n    Serial.begin(115200);\n    SPIFFS.begin(true);\n    camera.begin(FRAME_SIZE, PIXFORMAT_GRAYSCALE);\n    motion.setBlockVariationThreshold(BLOCK_VARIATION_THRESHOLD);\n}\n\nvoid loop() {\n    camera_fb_t *frame = camera.capture();\n\n    // resize image and detect motion\n    downscaler.downscale(frame->buf, resized);\n    motion.update(resized);\n    motion.detect();\n\n    if (motion.ratio() > MOTION_THRESHOLD) {\n        Serial.println("Motion detected");\n\n        // here we want to save the image to disk\n    }\n}
\n

Save image to disk

\n

Fine, we can detect motion!

\n

Now we want to save the triggering image to disk in a format that we can decode without any custom software. It would be cool if we could see the image using the native Esp32 Filesystem Browser sketch.

\n

Thanks to the folks at Espressif, the esp32 can encode a raw image to JPEG format: it is convenient to use (any PC on earth can read a JPEG) and it is also fast.

\n

(Thanks to the reader ankaiser for pointing it out!)

\n

It's really easy to do thanks to the EloquentVision library.

\n
if (motion.ratio() > MOTION_THRESHOLD) {\n        Serial.println("Motion detected");\n\n        // quality ranges from 10 to 64 -> the higher, the more detailed\n        uint8_t quality = 30;\n        JpegWriter<SOURCE_WIDTH, SOURCE_HEIGHT> jpegWriter;\n        File imageFile = SPIFFS.open("/capture.jpg", "wb");\n\n        // it takes < 1 second for a 320x240 image and 4 Kb of space\n        jpegWriter.writeGrayscale(imageFile, frame->buf, quality);\n        imageFile.close();\n}
\n

Well done! Now your image is on the disk and can be downloaded with the FSBrowser sketch.

\n
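
If you don't want to set up the FSBrowser right away, a quick way to check that the capture actually made it to flash is to list the filesystem contents over Serial. Here is a minimal sketch of such a check (plain ESP32 SPIFFS API, nothing specific to the EloquentVision library):

\n
// list every file stored on SPIFFS with its size\n// handy to verify that /capture.jpg was actually written\nvoid listFiles() {\n    File root = SPIFFS.open("/");\n    File file = root.openNextFile();\n\n    while (file) {\n        Serial.print(file.name());\n        Serial.print(": ");\n        Serial.print(file.size());\n        Serial.println(" bytes");\n        file = root.openNextFile();\n    }\n}
\n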

Now you have all the tools you need to create your own DIY surveillance camera with motion detection!

\n

You can use it to catch thieves (I discourage you from relying on such a rudimentary setup, however!), to capture images of wild animals in your garden (birds, squirrels or the like), or any other application you see fit.

\n

Further improvements

\n

Of course, a proper motion detection setup would be more complex than the one presented here. Nevertheless, a couple of quick fixes can greatly improve the usability of this project with little effort. Here are my suggestions.

\n

#1: Debouncing successive frames: the code presented in this post is a stripped-down version of a more complete esp32-cam motion detection example sketch.

\n

That sketch implements a debouncing function to prevent writing "ghost images" (see the original post on motion detection for clear evidence of this effect).

\n
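
If you don't want to pull in the full example sketch, here is a minimal sketch of such a debouncing guard (my own illustration, not the library's API): it simply enforces a minimum interval between two saved captures.

\n
// a simple debouncing guard: skip captures that happen too close to the previous one\n// (declare these globals next to the other variables of the sketch)\nuint32_t lastCaptureTime = 0;\nconst uint32_t DEBOUNCE_INTERVAL_MS = 5000;\n\nif (motion.ratio() > MOTION_THRESHOLD && millis() - lastCaptureTime > DEBOUNCE_INTERVAL_MS) {\n    Serial.println("Motion detected");\n    lastCaptureTime = millis();\n\n    // ... save the image as shown above ...\n}
\n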

#2: Proper file naming: the example sketch uses a fixed filename for the image. This means any new image will overwrite the previous one, which may be undesirable depending on your requirements. A proper way to handle this would be to attach an RTC and name the image after the time the motion occurred (something like "motion_2020-12-03_08:09:10.jpg")

\n
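
If you don't have an RTC at hand, you can still avoid overwriting by appending an incrementing counter to the filename. A minimal sketch of the idea (the nextFilename() helper is hypothetical, not part of the library):

\n
// name each capture with an incrementing counter, e.g. /capture-42.jpg\n// (an RTC-based timestamp would replace the counter here)\n// note: the counter restarts after a reboot; persist it or use an RTC for production\nuint16_t captureCount = 0;\n\nString nextFilename() {\n    return String("/capture-") + (captureCount++) + ".jpg";\n}\n\n// usage:\n// File imageFile = SPIFFS.open(nextFilename().c_str(), "wb");
\n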

#3: RGB images: this is something I'm working on. I mean, the Bitmap writer is there (so you could actually use it to store images on your esp32), but the multi-channel motion detection is driving me crazy: I need some more time to design it the way I want, so stay tuned!

\n
\n

I hope you enjoyed this tutorial on esp32-cam motion detection with photo capture: it was born in response to your requests, so don't be afraid to ask me anything: I will do my best to help you!

\n

The post Esp32-cam motion detection WITH PHOTO CAPTURE! (grayscale version) appeared first on Eloquent Arduino Blog.

\n", "content_text": "Do you want to transform your cheap esp32-cam in a DIY surveillance camera with moton detection AND photo capture?\nLook no further: this post explains STEP-BY-STEP all you need to know to build one yourself!\n\n\nAs I told you in the Easier, faster pure video Esp32-cam motion detection post, motion detection on the esp32-cam seems to be the hottest topic on my blog, so I thought it deserved some more tutorials.\nWithout question, to #1 request you made me in the comments was\n\nHow can I save the image that triggered the motion detection to the disk?\n\nWell, in this post I will show you how to save the image to the SPIFFS filesystem your esp32-cam comes equipped with!\nMotion detection, refactored\nPlease read the post on easier, faster esp32-cam motion detection first if you want to understand the following code.\nIt took me quite some time to write this post because I was struggling to design a clear, easy to use API for the motion detection feature and the image storage.\nAnd I have to admit that, even after so long, I'm still not satisfied with the results.\nNonetheless, it works, and it works well in my opinion, so I will publish this and maybe get feedback from you to help me improve (so please leave a comment if you have any suggestion).\nI won't bother you with the design considerations I took since this is an hands-on tutorial, so let's take a look at the code to implement motion detection on the esp32-cam or any other esp32 with a camera attached (I'm using the M5Stick camera).\nFirst of all, you need the EloquentVision library: you can install it either from Github or using the Arduino IDE's Library Manager.\nNext, the code.\n// Change according to your model\n// The models available are\n// - CAMERA_MODEL_WROVER_KIT\n// - CAMERA_MODEL_ESP_EYE\n// - CAMERA_MODEL_M5STACK_PSRAM\n// - CAMERA_MODEL_M5STACK_WIDE\n// - CAMERA_MODEL_AI_THINKER\n#define CAMERA_MODEL_M5STACK_WIDE\n\n#include <FS.h>\n#include <SPIFFS.h>\n#include "EloquentVision.h"\n\n// set the resolution of the source image and the resolution of the downscaled image for the motion detection\n#define FRAME_SIZE FRAMESIZE_QVGA\n#define SOURCE_WIDTH 320\n#define SOURCE_HEIGHT 240\n#define CHANNELS 1\n#define DEST_WIDTH 32\n#define DEST_HEIGHT 24\n#define BLOCK_VARIATION_THRESHOLD 0.3\n#define MOTION_THRESHOLD 0.2\n\n// we're using the Eloquent::Vision namespace a lot!\nusing namespace Eloquent::Vision;\nusing namespace Eloquent::Vision::IO;\nusing namespace Eloquent::Vision::ImageProcessing;\nusing namespace Eloquent::Vision::ImageProcessing::Downscale;\nusing namespace Eloquent::Vision::ImageProcessing::DownscaleStrategies;\n\n// an easy interface to capture images from the camera\nESP32Camera camera;\n// the buffer to store the downscaled version of the image\nuint8_t resized[DEST_HEIGHT][DEST_WIDTH];\n// the downscaler algorithm\n// for more details see https://eloquentarduino.github.io/2020/05/easier-faster-pure-video-esp32-cam-motion-detection\nCross<SOURCE_WIDTH, SOURCE_HEIGHT, DEST_WIDTH, DEST_HEIGHT> crossStrategy;\n// the downscaler container\nDownscaler<SOURCE_WIDTH, SOURCE_HEIGHT, CHANNELS, DEST_WIDTH, DEST_HEIGHT> downscaler(&crossStrategy);\n// the motion detection algorithm\nMotionDetection<DEST_WIDTH, DEST_HEIGHT> motion;\n\nvoid setup() {\n Serial.begin(115200);\n SPIFFS.begin(true);\n camera.begin(FRAME_SIZE, PIXFORMAT_GRAYSCALE);\n motion.setBlockVariationThreshold(BLOCK_VARIATION_THRESHOLD);\n}\n\nvoid loop() {\n camera_fb_t *frame = camera.capture();\n\n // resize image and 
detect motion\n downscaler.downscale(frame->buf, resized);\n motion.update(resized);\n motion.detect();\n\n if (motion.ratio() > MOTION_THRESHOLD) {\n Serial.println("Motion detected");\n\n // here we want to save the image to disk\n }\n}\nSave image to disk\nFine, we can detect motion!\nNow we want to save the triggering image to disk in a format that we can decode without any custom software. It would be cool if we could see the image using the native Esp32 Filesystem Browser sketch.\nThankfully to the guys at espressif, the esp32 is able to encode a raw image to JPEG format: it is convenient to use (any PC on earth can read a jpeg) and it is also fast.\nand thanks to the reader ankaiser for pointing it out\nIt's really easy to do thanks to the EloquentVision library.\nif (motion.ratio() > MOTION_THRESHOLD) {\n Serial.println("Motion detected");\n\n // quality ranges from 10 to 64 -> the higher, the more detailed\n uint8_t quality = 30;\n JpegWriter<SOURCE_WIDTH, SOURCE_HEIGHT> jpegWriter;\n File imageFile = SPIFFS.open("/capture.jpg", "wb");\n\n // it takes < 1 second for a 320x240 image and 4 Kb of space\n jpegWriter.writeGrayscale(imageFile, frame->buf, quality);\n imageFile.close();\n}\nWell done! Now your image is on the disk and can be downloaded with the FSBrowser sketch.\nNow you have all the tools you need to create your own DIY surveillance camera with motion detection feature!\nYou can use it to catch thieves (I discourage you to rely on such a rudimentary setup however!), to capture images of wild animals in your garden (birds, sqirrels or the like), or any other application you see fit.\nFurther improvements\nOf course you may well understand that a proper motion detection setup should be more complex than the one presented here. Nevertheless, a couple of quick fixes can greatly improve the usability of this project with little effort. Here I suggest you a couple.\n#1: Debouncing successive frames: the code presented in this post is a stripped down version of a more complete esp32-cam motion detection example sketch.\nThat sketch implements a debouncing function to prevent writing "ghost images" (see the original post on motion detection for a clear evidence of this effect).\n#2: Proper file naming: the example sketch uses a fixed filename for the image. This means any new image will overwrite the older, which may be undesiderable based on your requirements. A proper way to handle this would be to attach an RTC and name the image after the time it occurred (something like "motion_2020-12-03_08:09:10.bmp")\n#3: RGB images: this is something I'm working on. I mean, the Bitmap writer is there (so you could actually use it to store images on your esp32), but the multi-channel motion detection is driving me crazy, I need some more time to design it the way I want, so stay tuned!\n\nI hope you enjoyed this tutorial on esp32-cam motion detection with photo capture: it was born as a response to your asking, so don't be afraid and ask me anything: I will do my best to help you!\nL'articolo Esp32-cam motion detection WITH PHOTO CAPTURE! 
(grayscale version) proviene da Eloquent Arduino Blog.", "date_published": "2020-12-03T18:50:59+01:00", "date_modified": "2020-12-06T09:31:20+01:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "Computer vision", "Eloquent library" ] }, { "id": "https://eloquentarduino.github.io/?p=1110", "url": "https://eloquentarduino.com/projects/esp32-arduino-motion-detection", "title": "Easier, faster pure video ESP32 cam motion detection", "content_html": "

If you liked my post about ESP32 cam motion detection, you'll love this updated version: it's easier to use and blazing fast!

\n

\"Faster

\n

\n

The post about pure video ESP32 cam motion detection without an external PIR is my most successful post at the moment. Many of you are interested in this topic.

\n

One of my readers, though, pointed out that my implementation was quite slow: he only achieved a bare 5 fps in his project. So he asked for a better alternative.

\n

Since the post was of great interest to many people, I took the time to revisit the code and make improvements.

\n

I came up with a 100% rewrite that is both easier to use and faster. Actually, it is blazing fast!

\n

Let's see how it works.

\n

Table of contents
  1. Downsampling
    1. Nearest neighbor
    2. Full block average
    3. Core block average
    4. Cross block average
    5. Diagonal block average
    6. Implement your own
  2. Benchmarks
  3. Motion detection
  4. Full code

\n

Downsampling

\n

In the original post I introduced the idea of downsampling the image from the camera for a faster and more robust motion detection. I wrote the code in the main sketch to keep it self-contained.

\n

Looking back now it was a poor choice, since it cluttered the project and distracted from the main purpose, which is motion detection.

\n

Moreover, I thought that scanning the image buffer in sequential order would be the fastest approach.

\n

It turns out I was wrong.

\n

This time I scan the image buffer following the blocks that will compose the resulting image and the results are... much faster.

\n

Also, I decided to inject some more efficiency that will further speed up the computation: different strategies for downsampling.

\n

The idea of downsampling is that you have to "collapse" a block of NxN from the original image to just one pixel of the resulting image.

\n

Now, there are a variety of ways you can accomplish this. The first two I present here are the most obvious; the others are of my own "invention": nothing fancy nor new, but they're fast and serve the purpose well.

\n

Nearest neighbor

\n

You can just pick the center of the NxN block and use its value for the output.
\nOf course it is fast (possibly the fastest approach), but it won't be very accurate: one pixel out of NxN isn't representative of the overall region and will suffer heavily from noise.

\n
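
To make the strategy concrete, here is a minimal free-standing sketch of nearest-neighbor downscaling (my own illustration: the library ships this as the nearest strategy used in the full code below):

\n
// minimal nearest-neighbor downscaling: pick the center pixel\n// of each blockSize x blockSize block as the output value\nvoid downscaleNearest(const uint8_t *source, uint8_t *dest, uint16_t sourceWidth, uint16_t sourceHeight, uint8_t blockSize) {\n    const uint16_t destWidth = sourceWidth / blockSize;\n    const uint16_t destHeight = sourceHeight / blockSize;\n\n    for (uint16_t y = 0; y < destHeight; y++) {\n        for (uint16_t x = 0; x < destWidth; x++) {\n            // center of the current block in the source image\n            const uint16_t sourceX = x * blockSize + blockSize / 2;\n            const uint16_t sourceY = y * blockSize + blockSize / 2;\n\n            dest[y * destWidth + x] = source[(uint32_t) sourceY * sourceWidth + sourceX];\n        }\n    }\n}
\n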

\"Nearest

\n

\"Nearest

\n

Full block average

\n

This is the most intuitive alternative: use the average of all the pixels in the block as the output value. This is arguably the "proper" way to do it, since you're using all the pixels in the source image to compute the new one.

\n

\"Full
\n\"Full

\n

Core block average

\n

As a faster alternative, I thought that averaging only the "core" (the most internal part) of the block would be a good-enough solution. There's no theoretical proof that this holds, but our task here is to create a smaller representation of the original image, not to produce an accurate smaller version.

\n

\"Core
\n\"Core

\n

I'll stress this point: the only reason we do downsampling is to compare two sequential frames and detect if they differ above a certain threshold. This downsampling doesn't have to mimic the actual image: it can transform the source in any fancy way, as long as it stays consistent and captures the variations over time.

\n

Cross block average

\n

This time we consider all the pixels along the vertical and horizontal central axes. The idea is that you will capture a good portion of the variation along both axes, giving quite accurate results.

\n

\"Cross
\n\"Cross

\n

Diagonal block average

\n

This alternative, too, came to my mind out of nowhere, really. I just think it is a good way to capture all of the block's variation, probably even better than the vertical and horizontal directions.

\n

\"Diagonal
\n\"Diagonal

\n

Implement your own

\n

Not satisfied with the methods above? No problem, you can still implement your own.

\n

The ones presented above are just some algorithms that came to my mind: I'm not telling you they're the best.

\n

They worked for me, that's it.

\n

If you think you found a better solution, I encourage you to implement it and share it with me and the other readers, so we can all make progress on this together.
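
\n

As a starting point, a custom strategy boils down to a function that maps each NxN source block to one output pixel. Here is a minimal sketch of a diagonal-average variant written as a plain function (my own illustration: check the Github repo for the exact strategy interface the library expects):

\n
// a custom strategy sketch: average the pixels on the two diagonals\n// of the block at (blockX, blockY) and return the result as the output pixel\nuint8_t diagonalAverage(const uint8_t *source, uint16_t sourceWidth, uint16_t blockX, uint16_t blockY, uint8_t blockSize) {\n    uint16_t sum = 0;\n\n    for (uint8_t i = 0; i < blockSize; i++) {\n        const uint32_t rowStart = (uint32_t) (blockY * blockSize + i) * sourceWidth;\n\n        // main diagonal pixel\n        sum += source[rowStart + blockX * blockSize + i];\n        // anti-diagonal pixel\n        sum += source[rowStart + blockX * blockSize + (blockSize - 1 - i)];\n    }\n\n    return sum / (2 * blockSize);\n}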

\n

Benchmarks

\n

So, at the very beginning I said this new implementation is blazingly fast.

\n

How fast, exactly?

\n

As fast as it can be, arguably.

\n

I mean, so fast it won't alter your fps.

\n

Look at the results I got on my M5Stack camera.

\n
Algorithm        | Time to execute (micros) | FPS
None             | 0                        | 25
Nearest neighbor | 160                      | 25
Cross block      | 700                      | 25
Core block       | 800                      | 25
Diagonal block   | 950                      | 25
Full block       | 4900                     | 12
\n

As you can see, only the full block creates a delay in the process (quite a bit of delay even): the other methods won't slow down your program in any noticeable way.

\n

If you test Nearest neighbor and it works for you, then you'll be extremely light on computation resources with only 160 microseconds of delay.

\n

This is what I mean by blazing fast.

\n
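
If you want to reproduce these figures on your own board, you can time the downscaling with micros(). A minimal sketch of the measurement, averaging over 100 runs to smooth out jitter (it reuses the downscaleImage() call from the full code below):

\n
// time the downscaling step, averaged over a number of runs\nvoid benchmarkDownscale(camera_fb_t *frame) {\n    const uint16_t runs = 100;\n    const uint32_t start = micros();\n\n    for (uint16_t i = 0; i < runs; i++)\n        downscaleImage(frame->buf, currentFrame, nearest, SOURCE_WIDTH, SOURCE_HEIGHT, BLOCK_SIZE);\n\n    Serial.print("downscale takes ");\n    Serial.print((micros() - start) / runs);\n    Serial.println(" us on average");\n}
\n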

Motion detection

\n

The motion detection part hasn't changed, so I point you to the original post to read more about the Block difference threshold and the Image difference threshold.

\n

Full code

\n
#define CAMERA_MODEL_M5STACK_WIDE\n#include "EloquentVision.h"\n\n#define FRAME_SIZE FRAMESIZE_QVGA\n#define SOURCE_WIDTH 320\n#define SOURCE_HEIGHT 240\n#define BLOCK_SIZE 10\n#define DEST_WIDTH (SOURCE_WIDTH / BLOCK_SIZE)\n#define DEST_HEIGHT (SOURCE_HEIGHT / BLOCK_SIZE)\n#define BLOCK_DIFF_THRESHOLD 0.2\n#define IMAGE_DIFF_THRESHOLD 0.1\n#define DEBUG 0\n\nusing namespace Eloquent::Vision;\n\nESP32Camera camera;\nuint8_t prevFrame[DEST_WIDTH * DEST_HEIGHT] = { 0 };\nuint8_t currentFrame[DEST_WIDTH * DEST_HEIGHT] = { 0 };\n\n// function prototypes\nbool motionDetect();\nvoid updateFrame();\n\n/**\n *\n */\nvoid setup() {\n    Serial.begin(115200);\n    camera.begin(FRAME_SIZE, PIXFORMAT_GRAYSCALE);\n}\n\n/**\n *\n */\nvoid loop() {\n    /**\n     * Algorithm:\n     *  1. grab frame\n     *  2. compare with previous to detect motion\n     *  3. update previous frame\n     */\n\n    time_t start = millis();\n    camera_fb_t *frame = camera.capture();\n\n    downscaleImage(frame->buf, currentFrame, nearest, SOURCE_WIDTH, SOURCE_HEIGHT, BLOCK_SIZE);\n\n    if (motionDetect()) {\n        Serial.print("Motion detected @ ");\n        Serial.print(floor(1000.0f / (millis() - start)));\n        Serial.println(" FPS");\n    }\n\n    updateFrame();\n}\n\n/**\n * Compute the number of different blocks\n * If there are enough, then motion happened\n */\nbool motionDetect() {\n    uint16_t changes = 0;\n    const uint16_t blocks = DEST_WIDTH * DEST_HEIGHT;\n\n    for (int y = 0; y < DEST_HEIGHT; y++) {\n        for (int x = 0; x < DEST_WIDTH; x++) {\n            float current = currentFrame[y * DEST_WIDTH + x];\n            float prev = prevFrame[y * DEST_WIDTH + x];\n            float delta = abs(current - prev) / prev;\n\n            if (delta >= BLOCK_DIFF_THRESHOLD)\n                changes += 1;\n        }\n    }\n\n    return (1.0 * changes / blocks) > IMAGE_DIFF_THRESHOLD;\n}\n\n/**\n * Copy current frame to previous\n */\nvoid updateFrame() {\n    memcpy(prevFrame, currentFrame, DEST_WIDTH * DEST_HEIGHT);\n}
\n
\n

Check the full project code on Github and remember to star!

\n

The post Easier, faster pure video ESP32 cam motion detection appeared first on Eloquent Arduino Blog.

\n", "content_text": "If you liked my post about ESP32 cam motion detection, you'll love this updated version: it's easier to use and blazing fast!\n\n\nThe post about pure video ESP32 cam motion detection without an external PIR is my most successful post at the moment. Many of you are interested about this topic.\nOne of my readers, though, pointed out my implementation was quite slow and he only achieved bare 5 fps in his project. So he asked for a better alternative.\nSince the post was of great interest for many people, I took the time to revisit the code and make improvements.\nI came up with a 100% re-writing that is both easier to use and faster. Actually, it is blazing fast!.\nLet's see how it works.\nTable of contentsDownsamplingNearest neighborFull block averageCore block averageCross block averageDiagonal block averageImplement your ownBenchmarksMotion detectionFull code\nDownsampling\nIn the original post I introduced the idea of downsampling the image from the camera for a faster and more robust motion detection. I wrote the code in the main sketch to keep it self-contained.\nLooking back now it was a poor choice, since it cluttered the project and distracted from the main purpose, which is motion detection.\nMoreover, I thought that scanning the image buffer in sequential order would be the fastest approach.\nIt turns out I was wrong.\nThis time I scan the image buffer following the blocks that will compose the resulting image and the results are... much faster.\nAlso, I decided to inject some more efficiency that will further speedup the computation: using different strategies for downsampling.\nThe idea of downsampling is that you have to "collapse" a block of NxN from the original image to just one pixel of the resulting image.\nNow, there are a variety of ways you can accomplish this. The first two I present here are the most obvious, the other two are of my "invention": nothing fancy nor new, but they're fast and serve the purpose well.\nNearest neighbor\nYou can just pick the center of the NxN block and use its value for the output.\nOf course it is fast (possibly the fastest approach), but wouldn't be very accurate. One pixel out of NxN wouldn't be representative of the overall region and will heavily suffer from noise.\n\n\nFull block average\nThis is the most intuitive alternative: use the average of all the pixels in the block as the ouput value. This is arguabily the "proper" way to do it, since you're using all the pixels in the source image to compute the new one.\n\n\nCore block average\nAs a faster alternative, I thought that averaging only the "core" (the most internal part) of the block would have been a good-enough solution. It has no theoretical proof that this yields true, but our task here is to create a smaller representation of the original image, not producing an accurate smaller version.\n\n\nI'll stress this point: the only reason we do downsampling is to compare two sequential frame and detect if they differ above a certain threshold. This downsampling doesn't have to mimic the actual image: it can transform the source in any fancy way, as long as it stays consistent and captures the variations over time.\nCross block average\nThis time we consider all the pixels along the vertical and horizontal central axes. The idea is that you will capture a good portion of the variation along both the axis, given quite accurate results.\n\n\nDiagonal block average\nThis alternative too came to my mind from nowhere, really. 
I just think it is a good alternative to capture all the block's variation, probably even better than vertical and horizontal directions.\n\n\nImplement your own\nNot satisfied from the methods above? No problem, you can still implement your own.\nThe ones presented above are just some algorithms that came to my mind: I'm not telling you they're the best.\nThey worked for me, that's it.\nIf you think you found a better solution, I encourage you implement it and even share it with me and the other readers, so we can all make progress on this together.\n\r\n\r\n\r\n \r\n\tFinding this content useful?\r\n\r\n\t\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t \r\n \r\n \r\n \r\n\r\n\r\n\r\n\nBenchmarks\nSo, at the very beginning I said this new implementation is blazingly fast. \nHow much fast?\nAs fast as it can be, arguably.\nI mean, so fast it won't alter your fps.\nLook at the results I got on my M5Stack camera.\n\n\n\nAlgorithm\nTime to execute (micros)\nFPS\n\n\n\n\nNone\n0\n25\n\n\nNearest neighbor\n160\n25\n\n\nCross block\n700\n25\n\n\nCore block\n800\n25\n\n\nDiagonal block\n950\n25\n\n\nFull block\n4900\n12\n\n\n\nAs you can see, only the full block creates a delay in the process (quite a bit of delay even): the other methods won't slow down your program in any noticeable way.\nIf you test Nearest neighbor and it works for you, then you'll be extremely light on computation resources with only 160 microseconds of delay.\nThis is what I mean by blazing fast.\nMotion detection\nThe motion detection part hasn't changed, so I point you to the original post to read more about the Block difference threshold and the Image difference threshold.\nFull code\n#define CAMERA_MODEL_M5STACK_WIDE\n#include "EloquentVision.h"\n\n#define FRAME_SIZE FRAMESIZE_QVGA\n#define SOURCE_WIDTH 320\n#define SOURCE_HEIGHT 240\n#define BLOCK_SIZE 10\n#define DEST_WIDTH (SOURCE_WIDTH / BLOCK_SIZE)\n#define DEST_HEIGHT (SOURCE_HEIGHT / BLOCK_SIZE)\n#define BLOCK_DIFF_THRESHOLD 0.2\n#define IMAGE_DIFF_THRESHOLD 0.1\n#define DEBUG 0\n\nusing namespace Eloquent::Vision;\n\nESP32Camera camera;\nuint8_t prevFrame[DEST_WIDTH * DEST_HEIGHT] = { 0 };\nuint8_t currentFrame[DEST_WIDTH * DEST_HEIGHT] = { 0 };\n\n// function prototypes\nbool motionDetect();\nvoid updateFrame();\n\n/**\n *\n */\nvoid setup() {\n Serial.begin(115200);\n camera.begin(FRAME_SIZE, PIXFORMAT_GRAYSCALE);\n}\n\n/**\n *\n */\nvoid loop() {\n /**\n * Algorithm:\n * 1. grab frame\n * 2. compare with previous to detect motion\n * 3. 
update previous frame\n */\n\n time_t start = millis();\n camera_fb_t *frame = camera.capture();\n\n downscaleImage(frame->buf, currentFrame, nearest, SOURCE_WIDTH, SOURCE_HEIGHT, BLOCK_SIZE);\n\n if (motionDetect()) {\n Serial.print("Motion detected @ ");\n Serial.print(floor(1000.0f / (millis() - start)));\n Serial.println(" FPS");\n }\n\n updateFrame();\n}\n\n/**\n * Compute the number of different blocks\n * If there are enough, then motion happened\n */\nbool motionDetect() {\n uint16_t changes = 0;\n const uint16_t blocks = DEST_WIDTH * DEST_HEIGHT;\n\n for (int y = 0; y < DEST_HEIGHT; y++) {\n for (int x = 0; x < DEST_WIDTH; x++) {\n float current = currentFrame[y * DEST_WIDTH + x];\n float prev = prevFrame[y * DEST_WIDTH + x];\n float delta = abs(current - prev) / prev;\n\n if (delta >= BLOCK_DIFF_THRESHOLD)\n changes += 1;\n }\n }\n\n return (1.0 * changes / blocks) > IMAGE_DIFF_THRESHOLD;\n}\n\n/**\n * Copy current frame to previous\n */\nvoid updateFrame() {\n memcpy(prevFrame, currentFrame, DEST_WIDTH * DEST_HEIGHT);\n}\n\nCheck the full project code on Github and remember to star!\n\r\n\r\n\r\n \r\n\tFinding this content useful?\r\n\r\n\t\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t \r\n \r\n \r\n \r\n\r\n\r\n\r\n\nL'articolo Easier, faster pure video ESP32 cam motion detection proviene da Eloquent Arduino Blog.", "date_published": "2020-05-10T21:26:08+02:00", "date_modified": "2020-05-13T21:19:35+02:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "camera", "esp32", "Computer vision" ] }, { "id": "https://eloquentarduino.github.io/?p=956", "url": "https://eloquentarduino.github.io/2020/02/easy-arduino-thermal-camera-with-ascii-video-streaming/", "title": "Easy Arduino thermal camera with (ASCII) video streaming", "content_html": "

Ever wanted to use your thermal camera with Arduino but found it difficult to go beyond the tutorial code? Let's see the easiest possible way to view your thermal camera streaming without an LCD display!

\n

\"Arduino

\n

\n

MLX90640 thermal camera

\n

For Arduino there are essentially two thermal cameras available: the AMG8833 and the MLX90640.

\n

The AMG8833 is 8x8 and the MLX90640 is 32x24.

\n

They're not cheap, it is true.

\n

But if you have to spend money, I strongly advise you to buy the MLX90640: I have one and even it is not that accurate, so I can't imagine how low-definition the AMG8833 would be.

\n

If you want to actually get something meaningful from the camera, the AMG8833 won't give you any good results.

\n

Sure, you can do interpolation: interpolation would give you the impression you have a better definition, but you're just "inventing" values you don't actually have.

\n

For demo projects it could be enough. But for any serious application, spend $20 more and buy an MLX90640.

\n

MLX90640 eloquent library

\n

As you may know if you read my previous posts, I strongly believe in "eloquent" code, that is code that's as easy as possible to read.

\n

How many lines do you think you need to read an MLX90640 camera? Well, not that many, in fact.

\n
#include "EloquentMLX90640.h"\n\nusing namespace Eloquent::Sensors;\n\nfloat buffer[768];\nMLX90640 camera;\n\nvoid setup() {\n  Serial.begin(115200);\n\n  if (!camera.begin()) {\n    Serial.println("Init error");\n    delay(50000);\n  }\n}\n\nvoid loop() {\n  camera.read(buffer);\n  delay(3000);\n}
\n

If you skip the declaration lines, you only need a begin() and read() call.

\n

That's it.

\n

What begin() does is run all of the boilerplate code (checking the connection and initializing the parameters).

\n

read() populates the buffer you pass as argument with the temperature readings.

\n

From now on, you're free to handle that array as you like: this is the most flexible way for the library to handle any use-case. It simply poses no restrictions.

\n
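
For example, a first thing you may want to do with that buffer, right after camera.read(buffer), is extract the minimum, maximum and average temperature of the frame. A minimal sketch:

\n
// scan the 768-pixel buffer for min / max / average temperature\nfloat tMin = buffer[0];\nfloat tMax = buffer[0];\nfloat tSum = 0;\n\nfor (size_t i = 0; i < 768; i++) {\n    if (buffer[i] < tMin) tMin = buffer[i];\n    if (buffer[i] > tMax) tMax = buffer[i];\n    tSum += buffer[i];\n}\n\nSerial.print("min ");\nSerial.print(tMin);\nSerial.print(" | max ");\nSerial.print(tMax);\nSerial.print(" | avg ");\nSerial.println(tSum / 768);
\n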

You can find the camera code at the end of the page or on Github.

\n

Printing as ASCII Art

\n

Now that you have this data, you may want to actually "view" it. Well, that's not as easy a task as one may hope.

\n

You will need an LCD if you want to create a standalone product. If you have one, that's the best option: it's a really cute project to build.

\n

Here's a video from Adafruit that showcases even a 3D-printed case.

\n

\n

If you don't have an LCD, though, it is less practical to access your image.

\n

I did this in the past, and it meant creating a Python script reading the serial port every second and updating a plot.
\nIt works, sure, but it's not the most convenient way to handle it.

\n

This is the reason I thought about ASCII art: it is used to draw images in plain text, so you can view them directly in the serial monitor.

\n

Of course they will not be as accurate or representative as RGB images, but they can give you an idea of what you're framing in real time.

\n

I wrote a class to do this. Once imported in your sketch, it is super easy to get it working.

\n
#include "EloquentAsciiArt.h"\n\nusing namespace Eloquent::ImageProcessing;\n\nfloat buffer[768];\nuint8_t bufferBytes[768];\nMLX90640 camera;\n// we need to specify width and height of the image\nAsciiArt<32, 24> art(bufferBytes);\n\nvoid loop() {\n  camera.read(buffer);\n\n  // convert float image to uint8\n  for (size_t i = 0; i < 768; i++) {\n    // assumes readings are in the range 0-40 degrees\n    // change as per your need\n    bufferBytes[i] = map(buffer[i], 0, 40, 0, 255);\n  }\n\n  // print to Serial with a border of 2 characters, to distinguish one image from the next\n  art.print(&Serial, 2);\n  delay(2000);\n}
\n

As you can see, you need to create an AsciiArt object, map the image pixels to the range 0-255 and call the print() method: easy peasy!

\n

You can find the ASCII art generator code at the end of the page or on Github.

\n

Here's the result of the sketch. It's a video of me putting my arms on top of my head, one at a time, then standing up.

\n
Resize the Serial Monitor so that only a single frame at a time is visible, to get a \"video streaming\" effect
\n
[Video: https://eloquentarduino.github.io/wp-content/uploads/2020/02/Thermal-ascii-speedup.mp4]
\n

Of course the visual effect won't be as impressive as an RGB image, but you can clearly see my figure moving.

\n

The real bad part is the "glitch" you see between each frame when the scrolling happens: I don't know if it's possible to mitigate this.

\n

Check the full project code on Github

\n
\n
\n
#pragma once\n\n#include "Wire.h"\n#include "MLX90640_API.h"\n#include "MLX90640_I2C_Driver.h"\n\n#ifndef TA_SHIFT\n//Default shift for MLX90640 in open air\n#define TA_SHIFT 8\n#endif\n\nnamespace Eloquent {\n    namespace Sensors {\n\n        enum class MLX90640Status {\n            OK,\n            NOT_CONNECTED,\n            DUMP_ERROR,\n            PARAMETER_ERROR,\n            FRAME_ERROR\n        };\n\n        class MLX90640 {\n        public:\n            /**\n             *\n             * @param address\n             */\n            MLX90640(uint8_t address = 0x33) :\n                _address(address),\n                _status(MLX90640Status::OK) {\n\n            }\n\n            /**\n             *\n             * @return\n             */\n            bool begin() {\n                Wire.begin();\n                Wire.setClock(400000);\n\n                return isConnected() && loadParams();\n            }\n\n            /**\n             *\n             * @return\n             */\n            bool read(float result[768]) {\n                for (byte x = 0 ; x < 2 ; x++) {\n                    uint16_t frame[834];\n                    int status = MLX90640_GetFrameData(_address, frame);\n\n                    if (status < 0)\n                        return fail(MLX90640Status::FRAME_ERROR);\n\n                    float vdd = MLX90640_GetVdd(frame, &_params);\n                    float Ta = MLX90640_GetTa(frame, &_params);\n                    float tr = Ta - TA_SHIFT;\n                    float emissivity = 0.95;\n\n                    MLX90640_CalculateTo(frame, &_params, emissivity, tr, result);\n                }\n            }\n\n        protected:\n            uint8_t _address;\n            paramsMLX90640 _params;\n            MLX90640Status _status;\n\n            /**\n             * Test if device is connected\n             * @return\n             */\n            bool isConnected() {\n                Wire.beginTransmission(_address);\n\n                if (Wire.endTransmission() == 0) {\n                    return true;\n                }\n\n                return fail(MLX90640Status::NOT_CONNECTED);\n            }\n\n            /**\n             *\n             * @return\n             */\n            bool loadParams() {\n                uint16_t ee[832];\n                int status = MLX90640_DumpEE(_address, ee);\n\n                if (status != 0)\n                    return fail(MLX90640Status::DUMP_ERROR);\n\n                status = MLX90640_ExtractParameters(ee, &_params);\n\n                if (status != 0)\n                    return fail(MLX90640Status::PARAMETER_ERROR);\n\n                return true;\n            }\n\n            /**\n             * Mark a failure\n             * @param status\n             * @return\n             */\n            bool fail(MLX90640Status status) {\n                _status = status;\n\n                return false;\n            }\n        };\n    }\n}
\n
\n
#pragma once\n\n#include "Stream.h"\n\nnamespace Eloquent {\n    namespace ImageProcessing {\n\n        /**\n         *\n         * @tparam width\n         * @tparam height\n         */\n        template<size_t width, size_t height>\n        class AsciiArt {\n        public:\n            AsciiArt(const uint8_t *data) {\n                _data = data;\n            }\n\n            /**\n             * Get pixel at given coordinates\n             * @param x\n             * @param y\n             * @return\n             */\n            uint8_t at(size_t x, size_t y) {\n                return _data[y * width + x];\n            }\n\n            /**\n             * Print as ASCII art picture\n             * @param stream\n             */\n            void print(Stream *stream, uint8_t frameSize = 0) {\n                const char glyphs[] = " .,:;xyYX";\n                const uint8_t glyphsCount = 9;\n\n                printAsciiArtHorizontalFrame(stream, frameSize);\n\n                for (size_t y = 0; y < height; y++) {\n                    // vertical frame\n                    for (uint8_t k = 0; k < frameSize; k++)\n                        Serial.print('|');\n\n                    for (size_t x = 0; x < width; x++) {\n                        const uint8_t glyph = floor(((uint16_t) at(x, y)) * glyphsCount / 256);\n\n                        stream->print(glyphs[glyph]);\n                    }\n\n                    // vertical frame\n                    for (uint8_t k = 0; k < frameSize; k++)\n                        Serial.print('|');\n\n                    stream->print('\\n');\n                }\n\n                printAsciiArtHorizontalFrame(stream, frameSize);\n                stream->flush();\n            }\n\n        protected:\n            const uint8_t *_data;\n\n            /**\n             *\n             * @param stream\n             * @param frameSize\n             */\n            void printAsciiArtHorizontalFrame(Stream *stream, uint8_t frameSize) {\n                for (uint8_t i = 0; i < frameSize; i++) {\n                    for (size_t j = 0; j < width + 2 * frameSize; j++)\n                        stream->print('-');\n                    stream->print('\\n');\n                }\n            }\n        };\n    }\n}
\n

The post Easy Arduino thermal camera with (ASCII) video streaming appeared first on Eloquent Arduino Blog.

\n", "content_text": "Ever wanted to use your thermal camera with Arduino but found it difficult to go beyond the tutorials code? Let's see the easiest possible way to view your thermal camera streaming without an LCD display!\n\n\nMLX90640 thermal camera\nFor Arduino there are essentially two thermal camera available: the AMG8833 and the MLX90640.\nThe AMG8833 is 8x8 and the MLX90640 is 32x24.\nThey're not cheap, it is true.\nBut if you have to spend money, I strongly advise you to buy the MLX90640: I have one and it's not that accurate. I can't imagine how low definition would be the AMG8833.\nIf you want to actually get something meaningful from the camera, the AMG8833 won't give you any good results.\nSure, you can do interpolation: interpolation would give you the impression you have a better definition, but you're just "inventing" values you don't actually have.\nFor demo projects it could be enough. But for any serious application, spend 20$ more and buy an MLX90640.\nMLX90640 eloquent library\nAs you may know if you read my previous posts, I strongly believe in "eloquent" code, that is code that's as easy as possible to read.\nHow many lines do you think you need to read a MLX90640 camera? Well, not that much in fact.\n#include "EloquentMLX90640.h"\n\nusing namespace Eloquent::Sensors;\n\nfloat buffer[768];\nMLX90640 camera;\n\nvoid setup() {\n Serial.begin(115200);\n\n if (!camera.begin()) {\n Serial.println("Init error");\n delay(50000);\n }\n}\n\nvoid loop() {\n camera.read(buffer);\n delay(3000);\n}\nIf you skip the declaration lines, you only need a begin() and read() call.\nThat's it.\nWhat begin() does is to run all of the boilerplate code I mentioned earlier (checking the connection and initializing the parameters).\nread() populates the buffer you pass as argument with the temperature readings.\nFrom now on, you're free to handle that array as you may like: this is the most flexible way for the library to handle any use-case. It simply does not pose any restriction.\nYou can find the camera code at the end of the page or on Github.\nPrinting as ASCII Art\nNow that you have this data, you may want to actually "view" it. Well, that's not an easy task as one may hope.\nYou will need an LCD if you want to create a standalone product. If you have one, it'll be the best, it's a really cute project to build.\nHere's a video from Adafruit that showcases even a 3D-printed case.\n\nIf you don't have an LCD, though, it is less practical to access your image.\nI did this in the past, and it meant creating a Python script reading the serial port every second and updating a plot.\nIt works, sure, but it's not the most convenient way to handle it.\nThis is the reason I thought about ASCII art: it is used to draw images in plain text, so you can view them directly in the serial monitor.\nOf course they will not be as accurate or representative as RGB images, but can give you an idea of what you're framing in realtime.\nI wrote a class to do this. 
Once imported in your sketch, it is super easy to get it working.\n#include "EloquentAsciiArt.h"\n\nusing namespace Eloquent::ImageProcessing;\n\nfloat buffer[768];\nuint8_t bufferBytes[768];\nMLX90640 camera;\n// we need to specify width and height of the image\nAsciiArt<32, 24> art(bufferBytes);\n\nvoid loop() {\n camera.read(buffer);\n\n // convert float image to uint8\n for (size_t i = 0; i < 768; i++) {\n // assumes readings are in the range 0-40 degrees\n // change as per your need\n bufferBytes[i] = map(buffer[i], 0, 40, 0, 255);\n }\n\n // print to Serial with a border of 2 characters, to distinguish one image from the next\n art.print(&Serial, 2);\n delay(2000);\n}\nAs you can see, you need to create an AsciiArt object, map the image pixels in the range 0-255 and call the print() method: easy peasy!\nYou can find the ASCII art generator code at the end of the page or on Github.\nHere's the result of the sketch. It's a video of me putting my arms at the top of my head, once at a time, then standing up.\nResize the Serial Monitor as only a single frame at a time is visble to have a \"video streaming\" effect\n\nhttps://eloquentarduino.github.io/wp-content/uploads/2020/02/Thermal-ascii-speedup.mp4\nOf course the visual effect won't be as impressive as an RGB image, but you can clearly see my figure moving.\nThe real bad part is the "glitch" you see between each frame when the scrolling happens: this is something I don't know if it's possible to mitigate.\n\r\nCheck the full project code on Github\n\n\n#pragma once\n\n#include "Wire.h"\n#include "MLX90640_API.h"\n#include "MLX90640_I2C_Driver.h"\n\n#ifndef TA_SHIFT\n//Default shift for MLX90640 in open air\n#define TA_SHIFT 8\n#endif\n\nnamespace Eloquent {\n namespace Sensors {\n\n enum class MLX90640Status {\n OK,\n NOT_CONNECTED,\n DUMP_ERROR,\n PARAMETER_ERROR,\n FRAME_ERROR\n };\n\n class MLX90640 {\n public:\n /**\n *\n * @param address\n */\n MLX90640(uint8_t address = 0x33) :\n _address(address),\n _status(MLX90640Status::OK) {\n\n }\n\n /**\n *\n * @return\n */\n bool begin() {\n Wire.begin();\n Wire.setClock(400000);\n\n return isConnected() && loadParams();\n }\n\n /**\n *\n * @return\n */\n bool read(float result[768]) {\n for (byte x = 0 ; x < 2 ; x++) {\n uint16_t frame[834];\n int status = MLX90640_GetFrameData(_address, frame);\n\n if (status < 0)\n return fail(MLX90640Status::FRAME_ERROR);\n\n float vdd = MLX90640_GetVdd(frame, &_params);\n float Ta = MLX90640_GetTa(frame, &_params);\n float tr = Ta - TA_SHIFT;\n float emissivity = 0.95;\n\n MLX90640_CalculateTo(frame, &_params, emissivity, tr, result);\n }\n }\n\n protected:\n uint8_t _address;\n paramsMLX90640 _params;\n MLX90640Status _status;\n\n /**\n * Test if device is connected\n * @return\n */\n bool isConnected() {\n Wire.beginTransmission(_address);\n\n if (Wire.endTransmission() == 0) {\n return true;\n }\n\n return fail(MLX90640Status::NOT_CONNECTED);\n }\n\n /**\n *\n * @return\n */\n bool loadParams() {\n uint16_t ee[832];\n int status = MLX90640_DumpEE(_address, ee);\n\n if (status != 0)\n return fail(MLX90640Status::DUMP_ERROR);\n\n status = MLX90640_ExtractParameters(ee, &_params);\n\n if (status != 0)\n return fail(MLX90640Status::PARAMETER_ERROR);\n\n return true;\n }\n\n /**\n * Mark a failure\n * @param status\n * @return\n */\n bool fail(MLX90640Status status) {\n _status = status;\n\n return false;\n }\n };\n }\n}\n\n#pragma once\n\n#include "Stream.h"\n\nnamespace Eloquent {\n namespace ImageProcessing {\n\n /**\n *\n * @tparam width\n * 
@tparam height\n */\n template<size_t width, size_t height>\n class AsciiArt {\n public:\n AsciiArt(const uint8_t *data) {\n _data = data;\n }\n\n /**\n * Get pixel at given coordinates\n * @param x\n * @param y\n * @return\n */\n uint8_t at(size_t x, size_t y) {\n return _data[y * width + x];\n }\n\n /**\n * Print as ASCII art picture\n * @param stream\n */\n void print(Stream *stream, uint8_t frameSize = 0) {\n const char glyphs[] = " .,:;xyYX";\n const uint8_t glyphsCount = 9;\n\n printAsciiArtHorizontalFrame(stream, frameSize);\n\n for (size_t y = 0; y < height; y++) {\n // vertical frame\n for (uint8_t k = 0; k < frameSize; k++)\n Serial.print('|');\n\n for (size_t x = 0; x < width; x++) {\n const uint8_t glyph = floor(((uint16_t) at(x, y)) * glyphsCount / 256);\n\n stream->print(glyphs[glyph]);\n }\n\n // vertical frame\n for (uint8_t k = 0; k < frameSize; k++)\n Serial.print('|');\n\n stream->print('\\n');\n }\n\n printAsciiArtHorizontalFrame(stream, frameSize);\n stream->flush();\n }\n\n protected:\n const uint8_t *_data;\n\n /**\n *\n * @param stream\n * @param frameSize\n */\n void printAsciiArtHorizontalFrame(Stream *stream, uint8_t frameSize) {\n for (uint8_t i = 0; i < frameSize; i++) {\n for (size_t j = 0; j < width + 2 * frameSize; j++)\n stream->print('-');\n stream->print('\\n');\n }\n }\n };\n }\n}\nL'articolo Easy Arduino thermal camera with (ASCII) video streaming proviene da Eloquent Arduino Blog.", "date_published": "2020-02-29T17:20:15+01:00", "date_modified": "2020-03-02T20:19:00+01:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "Computer vision", "Electronics", "Eloquent library" ], "attachments": [ { "url": "https://eloquentarduino.github.io/wp-content/uploads/2020/02/Thermal-ascii-speedup.mp4", "mime_type": "video/mp4", "size_in_bytes": 479591 } ] }, { "id": "https://eloquentarduino.github.io/?p=931", "url": "https://eloquentarduino.github.io/2020/02/handwritten-digit-classification-with-arduino-and-microml/", "title": "Handwritten digit classification with Arduino and MicroML", "content_html": "

We continue exploring the endless possibilities of the MicroML (Machine Learning for Microcontrollers) framework on Arduino and ESP32 boards: in this post we're back to image classification. In particular, we'll distinguish handwritten digits using an ESP32 camera.

\n

\"Arduino

\n

\n

If this is the first time you're reading my blog, you may have missed that I'm on a journey to push the limits of Machine learning on embedded devices like the Arduino boards and ESP32.

\n

I started with accelerometer data classification, then did Wifi indoor positioning as a proof of concept.

\n

In the last weeks, though, I undertook a more difficult path: image classification.

\n

Image classification is where Convolutional Neural Networks really shine, but I'm here to question this assumption and demonstrate that it is possible to come up with much lighter alternatives.

\n

In this post we continue with the examples, replicating a "benchmark" dataset in Machine learning: handwritten digit classification.

\n
\nIf you are curious about a specific image classification task you would like to see implemented, let me know in the comments: I'm always open to new ideas\n
\n

The task

\n

The objective of this example is to be able to tell what a handwritten digit is, taking as input a photo from the ESP32 camera.

\n

In particular, we have 3 handwritten numbers and the task of our model will be to distinguish which image is what number.

\n

\"Handwritten

\n

I only have a single image per digit, but you're free to draw as many samples as you like: it should help improve the performance of your classifier.

\n

1. Feature extraction

\n

When dealing with images, if you use a CNN this step is often overlooked: CNNs are made on purpose to handle raw pixel values, so you just throw the image in and it is handled properly.

\n

When using other types of classifiers, it can help to add a bit of feature engineering to help the classifier do its job and achieve high accuracy.

\n

But not this time.

\n

I wanted to be as "light" as possible in this demo, so I only took a couple of steps during the feature acquisition:

\n
  1. use a grayscale image
  2. downsample to a manageable size
  3. convert it to black/white with a threshold
\n

I would hardly call this feature engineering.

\n

This is an example of the result of this pipeline.

\n

\"Handwritten

\n

The code for this pipeline is really simple and is almost the same from the example on motion detection.

\n
#include "esp_camera.h"\n\n#define PWDN_GPIO_NUM     -1\n#define RESET_GPIO_NUM    15\n#define XCLK_GPIO_NUM     27\n#define SIOD_GPIO_NUM     22\n#define SIOC_GPIO_NUM     23\n#define Y9_GPIO_NUM       19\n#define Y8_GPIO_NUM       36\n#define Y7_GPIO_NUM       18\n#define Y6_GPIO_NUM       39\n#define Y5_GPIO_NUM        5\n#define Y4_GPIO_NUM       34\n#define Y3_GPIO_NUM       35\n#define Y2_GPIO_NUM       32\n#define VSYNC_GPIO_NUM    25\n#define HREF_GPIO_NUM     26\n#define PCLK_GPIO_NUM     21\n\n#define FRAME_SIZE FRAMESIZE_QQVGA\n#define WIDTH 160\n#define HEIGHT 120\n#define BLOCK_SIZE 5\n#define W (WIDTH / BLOCK_SIZE)\n#define H (HEIGHT / BLOCK_SIZE)\n#define THRESHOLD 127\n\ndouble features[H*W] = { 0 };\n\nvoid setup() {\n    Serial.begin(115200);\n    Serial.println(setup_camera(FRAME_SIZE) ? "OK" : "ERR INIT");\n    delay(3000);\n}\n\nvoid loop() {\n    if (!capture_still()) {\n        Serial.println("Failed capture");\n        delay(2000);\n        return;\n    }\n\n    print_features();\n    delay(3000);\n}\n\nbool setup_camera(framesize_t frameSize) {\n    camera_config_t config;\n\n    config.ledc_channel = LEDC_CHANNEL_0;\n    config.ledc_timer = LEDC_TIMER_0;\n    config.pin_d0 = Y2_GPIO_NUM;\n    config.pin_d1 = Y3_GPIO_NUM;\n    config.pin_d2 = Y4_GPIO_NUM;\n    config.pin_d3 = Y5_GPIO_NUM;\n    config.pin_d4 = Y6_GPIO_NUM;\n    config.pin_d5 = Y7_GPIO_NUM;\n    config.pin_d6 = Y8_GPIO_NUM;\n    config.pin_d7 = Y9_GPIO_NUM;\n    config.pin_xclk = XCLK_GPIO_NUM;\n    config.pin_pclk = PCLK_GPIO_NUM;\n    config.pin_vsync = VSYNC_GPIO_NUM;\n    config.pin_href = HREF_GPIO_NUM;\n    config.pin_sscb_sda = SIOD_GPIO_NUM;\n    config.pin_sscb_scl = SIOC_GPIO_NUM;\n    config.pin_pwdn = PWDN_GPIO_NUM;\n    config.pin_reset = RESET_GPIO_NUM;\n    config.xclk_freq_hz = 20000000;\n    config.pixel_format = PIXFORMAT_GRAYSCALE;\n    config.frame_size = frameSize;\n    config.jpeg_quality = 12;\n    config.fb_count = 1;\n\n    bool ok = esp_camera_init(&config) == ESP_OK;\n\n    sensor_t *sensor = esp_camera_sensor_get();\n    sensor->set_framesize(sensor, frameSize);\n\n    return ok;\n}\n\nbool capture_still() {\n    camera_fb_t *frame = esp_camera_fb_get();\n\n    if (!frame)\n        return false;\n\n    // reset all the features\n    for (size_t i = 0; i < H * W; i++)\n      features[i] = 0;\n\n    // for each pixel, compute the position in the downsampled image\n    for (size_t i = 0; i < frame->len; i++) {\n      const uint16_t x = i % WIDTH;\n      const uint16_t y = floor(i / WIDTH);\n      const uint8_t block_x = floor(x / BLOCK_SIZE);\n      const uint8_t block_y = floor(y / BLOCK_SIZE);\n      const uint16_t j = block_y * W + block_x;\n\n      features[j] += frame->buf[i];\n    }\n\n    // apply threshold\n    for (size_t i = 0; i < H * W; i++) {\n      features[i] = (features[i] / (BLOCK_SIZE * BLOCK_SIZE) > THRESHOLD) ? 1 : 0;\n    }\n\n    return true;\n}\n\nvoid print_features() {\n    for (size_t i = 0; i < H * W; i++) {\n        Serial.print(features[i]);\n\n        if (i != H * W - 1)\n          Serial.print(',');\n    }\n\n    Serial.println();\n}
\n

2. Samples recording

\n

To create your own dataset, you need a collection of handwritten digits.

\n

You can do this part as you like, by using pieces of paper or a monitor. I used a tablet because it was well illuminated and I could open a bunch of tabs to keep a record of my samples.

\n

As in the apple vs orange post, keep in mind that you should be consistent during both the training phase and the inference phase.

\n

This is why I used tape to fix my ESP32 camera to the desk and kept the tablet in the exact same position.

\n

If you desire, you could experiment with slightly varying the capture setup during training and see if your classifier still achieves good accuracy: this is a test I didn't make.

\n

3. Train and export the classifier

\n

For a detailed guide refer to the tutorial

\n

\n

from sklearn.ensemble import RandomForestClassifier\r\nfrom micromlgen import port\r\n\r\n# put your samples in the dataset folder\r\n# one class per file\r\n# one feature vector per line, in CSV format\r\nfeatures, classmap = load_features('dataset/')\r\nX, y = features[:, :-1], features[:, -1]\r\nclassifier = RandomForestClassifier(n_estimators=30, max_depth=10).fit(X, y)\r\nc_code = port(classifier, classmap=classmap)\r\nprint(c_code)
\r\n\r\n

At this point you have to copy the printed code into a file called model.h inside your Arduino project.

\n
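
For reference, the generated model.h exposes (roughly) a predict function plus a helper that maps the predicted index back to a label. The exact signatures depend on your micromlgen version, so treat this outline as an assumption, not the generated code itself:

\n
// illustrative outline of what the generated model.h provides\n// (the actual code comes from micromlgen and may differ between versions)\nint predict(double *x);                // returns the predicted class index\nconst char* classIdxToName(int idx);   // maps that index back to your class label
\n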

4. The result

\n

Okay, at this point you should have all the working pieces to do handwritten digit image classification on your ESP32 camera. Include your model in the sketch and run the classification.

\n
#include "model.h"\n\nvoid loop() {\n    if (!capture_still()) {\n        Serial.println("Failed capture");\n        delay(2000);\n\n        return;\n    }\n\n    Serial.print("Number: ");\n    Serial.println(classIdxToName(predict(features)));\n    delay(3000);\n}
\n

Done.

\n

You can see a demo of my results in the video below.

\n
\n

Project figures

\n

My dataset is composed of 25 training samples in total and the SVM with linear kernel produced 17 support vectors.

\n

On my M5Stick camera board, the overhead for the model is 6.8 KB of flash and the inference takes 7 ms: not that bad!

\n
\r\n

Check the full project code on Github

\n

The post Handwritten digit classification with Arduino and MicroML appeared first on Eloquent Arduino Blog.

\n", "content_text": "We continue exploring the endless possibilities on the MicroML (Machine Learning for Microcontrollers) framework on Arduino and ESP32 boards: in this post we're back to image classification. In particular, we'll distinguish handwritten digits using an ESP32 camera.\n\n\nIf this is the first time you're reading my blog, you may have missed that I'm on a journey to push the limits of Machine learning on embedded devices like the Arduino boards and ESP32.\nI started with accelerometer data classification, then did Wifi indoor positioning as a proof of concept.\nIn the last weeks, though, I undertook a more difficult path that is image classification.\nImage classification is where Convolutional Neural Networks really shine, but I'm here to question this settlement and demostrate that it is possible to come up with much lighter alternatives.\nIn this post we continue with the examples, replicating a "benchmark" dataset in Machine learning: the handwritten digits classification.\n\nIf you are curious about a specific image classification task you would like to see implemented, let me know in the comments: I'm always open to new ideas\n\nThe task\nThe objective of this example is to be able to tell what an handwritten digit is, taking as input a photo from the ESP32 camera.\nIn particular, we have 3 handwritten numbers and the task of our model will be to distinguish which image is what number.\n\nI only have a single image per digit, but you're free to draw as many samples as you like: it should help improve the performance of you're classifier.\n1. Feature extraction\nWhen dealing with images, if you use a CNN this step is often overlooked: CNNs are made on purpose to handle raw pixel values, so you just throw the image in and it is handled properly.\nWhen using other types of classifiers, it could help add a bit of feature engineering to help the classifier doing its job and achieve high accuracy.\nBut not this time.\nI wanted to be as "light" as possible in this demo, so I only took a couple steps during the feature acquisition:\n\nuse a grayscale image\ndownsample to a manageable size\nconvert it to black/white with a threshold\n\nI would hardly call this feature engineering.\nThis is an example of the result of this pipeline.\n\nThe code for this pipeline is really simple and is almost the same from the example on motion detection.\n#include "esp_camera.h"\n\n#define PWDN_GPIO_NUM -1\n#define RESET_GPIO_NUM 15\n#define XCLK_GPIO_NUM 27\n#define SIOD_GPIO_NUM 22\n#define SIOC_GPIO_NUM 23\n#define Y9_GPIO_NUM 19\n#define Y8_GPIO_NUM 36\n#define Y7_GPIO_NUM 18\n#define Y6_GPIO_NUM 39\n#define Y5_GPIO_NUM 5\n#define Y4_GPIO_NUM 34\n#define Y3_GPIO_NUM 35\n#define Y2_GPIO_NUM 32\n#define VSYNC_GPIO_NUM 25\n#define HREF_GPIO_NUM 26\n#define PCLK_GPIO_NUM 21\n\n#define FRAME_SIZE FRAMESIZE_QQVGA\n#define WIDTH 160\n#define HEIGHT 120\n#define BLOCK_SIZE 5\n#define W (WIDTH / BLOCK_SIZE)\n#define H (HEIGHT / BLOCK_SIZE)\n#define THRESHOLD 127\n\ndouble features[H*W] = { 0 };\n\nvoid setup() {\n Serial.begin(115200);\n Serial.println(setup_camera(FRAME_SIZE) ? 
"OK" : "ERR INIT");\n delay(3000);\n}\n\nvoid loop() {\n if (!capture_still()) {\n Serial.println("Failed capture");\n delay(2000);\n return;\n }\n\n print_features();\n delay(3000);\n}\n\nbool setup_camera(framesize_t frameSize) {\n camera_config_t config;\n\n config.ledc_channel = LEDC_CHANNEL_0;\n config.ledc_timer = LEDC_TIMER_0;\n config.pin_d0 = Y2_GPIO_NUM;\n config.pin_d1 = Y3_GPIO_NUM;\n config.pin_d2 = Y4_GPIO_NUM;\n config.pin_d3 = Y5_GPIO_NUM;\n config.pin_d4 = Y6_GPIO_NUM;\n config.pin_d5 = Y7_GPIO_NUM;\n config.pin_d6 = Y8_GPIO_NUM;\n config.pin_d7 = Y9_GPIO_NUM;\n config.pin_xclk = XCLK_GPIO_NUM;\n config.pin_pclk = PCLK_GPIO_NUM;\n config.pin_vsync = VSYNC_GPIO_NUM;\n config.pin_href = HREF_GPIO_NUM;\n config.pin_sscb_sda = SIOD_GPIO_NUM;\n config.pin_sscb_scl = SIOC_GPIO_NUM;\n config.pin_pwdn = PWDN_GPIO_NUM;\n config.pin_reset = RESET_GPIO_NUM;\n config.xclk_freq_hz = 20000000;\n config.pixel_format = PIXFORMAT_GRAYSCALE;\n config.frame_size = frameSize;\n config.jpeg_quality = 12;\n config.fb_count = 1;\n\n bool ok = esp_camera_init(&config) == ESP_OK;\n\n sensor_t *sensor = esp_camera_sensor_get();\n sensor->set_framesize(sensor, frameSize);\n\n return ok;\n}\n\nbool capture_still() {\n camera_fb_t *frame = esp_camera_fb_get();\n\n if (!frame)\n return false;\n\n // reset all the features\n for (size_t i = 0; i < H * W; i++)\n features[i] = 0;\n\n // for each pixel, compute the position in the downsampled image\n for (size_t i = 0; i < frame->len; i++) {\n const uint16_t x = i % WIDTH;\n const uint16_t y = floor(i / WIDTH);\n const uint8_t block_x = floor(x / BLOCK_SIZE);\n const uint8_t block_y = floor(y / BLOCK_SIZE);\n const uint16_t j = block_y * W + block_x;\n\n features[j] += frame->buf[i];\n }\n\n // apply threshold\n for (size_t i = 0; i < H * W; i++) {\n features[i] = (features[i] / (BLOCK_SIZE * BLOCK_SIZE) > THRESHOLD) ? 1 : 0;\n }\n\n return true;\n}\n\nvoid print_features() {\n for (size_t i = 0; i < H * W; i++) {\n Serial.print(features[i]);\n\n if (i != H * W - 1)\n Serial.print(',');\n }\n\n Serial.println();\n}\n2. Samples recording\nTo create your own dataset, you need a collection of handwritten digits.\nYou can do this part as you like, by using pieces of paper or a monitor. I used a tablet because it was well illuminated and I could open a bunch of tabs to keep a record of my samples.\nAs in the apple vs orange, keep in mind that you should be consistent during both the training phase and the inference phase.\nThis is why I used tape to fix my ESP32 camera to the desk and kept the tablet in the exact same position.\nIf you desire, you could experiment varying slightly the capturing setup during the training and see if your classifier still achieves good accuracy: this is a test I didn't make.\n3. Train and export the classifier\r\n\r\nFor a detailed guide refer to the tutorial\r\n\r\n\r\nfrom sklearn.ensemble import RandomForestClassifier\r\nfrom micromlgen import port\r\n\r\n# put your samples in the dataset folder\r\n# one class per file\r\n# one feature vector per line, in CSV format\r\nfeatures, classmap = load_features('dataset/')\r\nX, y = features[:, :-1], features[:, -1]\r\nclassifier = RandomForestClassifier(n_estimators=30, max_depth=10).fit(X, y)\r\nc_code = port(classifier, classmap=classmap)\r\nprint(c_code)\r\n\r\nAt this point you have to copy the printed code and import it in your Arduino project, in a file called model.h.\n4. 
The result\nOkay, at this point you should have all the working pieces to do handwritten digit image classification on your ESP32 camera. Include your model in the sketch and run the classification.\n#include "model.h"\n\nvoid loop() {\n if (!capture_still()) {\n Serial.println("Failed capture");\n delay(2000);\n\n return;\n }\n\n Serial.print("Number: ");\n Serial.println(classIdxToName(predict(features)));\n delay(3000);\n}\nDone.\nYou can see a demo of my results in the video below.\nhttps://eloquentarduino.github.io/wp-content/uploads/2020/02/MNIST-mute.mp4\nProject figures\nMy dataset is composed of 25 training samples in total and the SVM with linear kernel produced 17 support vectors.\nOn my M5Stick camera board, the overhead for the model is 6.8 Kb of flash and the inference takes 7ms: not that bad!\n\r\nCheck the full project code on Github\nL'articolo Handwritten digit classification with Arduino and MicroML proviene da Eloquent Arduino Blog.", "date_published": "2020-02-23T11:53:03+01:00", "date_modified": "2020-05-31T18:50:44+02:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "camera", "esp32", "microml", "svm", "Arduino Machine learning", "Computer vision" ], "attachments": [ { "url": "https://eloquentarduino.github.io/wp-content/uploads/2020/02/MNIST-mute.mp4", "mime_type": "video/mp4", "size_in_bytes": 6424809 } ] }, { "id": "https://eloquentarduino.github.io/?p=820", "url": "https://eloquentarduino.github.io/2020/01/image-recognition-with-esp32-and-arduino/", "title": "Apple or Orange? Image recognition with ESP32 and Arduino", "content_html": "

Do you have an ESP32 camera?

\n

Want to do image recognition directly on your ESP32, without a PC?

\n

In this post we'll look into a very basic image recognition task: distinguish apples from oranges with machine learning.

\n

\"Apple

\n

\n

Image recognition is a very hot topic these days in the AI/ML landscape. Convolutional Neural Networks really shine in this task and can achieve almost perfect accuracy in many scenarios.

\n

Sadly, you can't run CNNs on your ESP32: they're just too large for a microcontroller.

\n

Since in this series about Machine Learning on Microcontrollers we're exploring the potential of Support Vector Machines (SVMs) at solving different classification tasks, we'll take a look into image classification too.

\n

Table of contents
  1. What we're going to do
  2. Features definition
  3. Extracting RGB components
  4. Record sample images
  5. Training the classifier
  6. Real world example
    1. Disclaimer

\n

What we're going to do

\n

In a previous post about color identification with Machine learning, we used an Arduino with a color sensor (TCS3200) to detect the object we were pointing at by its color: if we detected yellow, for example, we knew we had a banana in front of us.

\n

Of course such a process is not object recognition at all: yellow may be a banana, or a lemon, or an apple.

\n

Object inference, in that case, works only if you have exactly one object for a given color.

\n

The objective of this post, instead, is to investigate if we can use the MicroML framework to do simple image recognition on the images from an ESP32 camera.

\n

This is much more similar to the tasks you do on your PC with CNN or any other form of NN you are comfortable with. Sure, we will still apply some restrictions to fit the problem on a microcontroller, but this is a huge step forward compared to the simple color identification.

\n
\nIn this context, image recognition means deciding which class (from the trained ones) the current image belongs to. This algorithm can't locate interesting objects in the image, nor detect whether an object is present in the frame. It will classify the current image based on the samples recorded during training.\n
\n

As in any self-respecting introductory machine learning project about image classification, our task will be to distinguish an orange from an apple.

\n

Features definition

\n

I have to admit that I rarely use NNs, so I may be wrong here, but from the examples I read online it looks to me that feature engineering is not a fundamental task with NNs.

\n

Those few times I used a CNN, I always used the whole image as input, as-is. I didn't extract any features (e.g. a color histogram): the CNN worked perfectly fine with raw images.

\n

I don't think this will work as well with an SVM, but in this first post we're starting as simple as possible, so we'll be using the RGB components of the image as our features. In a future post, we'll introduce additional features to try to improve our results.

\n

I said we're using the RGB components of the image. But not all of them.

\n

Even at the lowest resolution of 160x120 pixels, a raw RGB image from the camera would generate 160x120x3 = 57600 features: way too many.

\n

We need to reduce this number to the bare minimum.

\n

How many pixels do you think are necessary to get reasonable results in this task of classifying apples from oranges?

\n

You would be surprised to know that I got 90% accuracy with an RGB image of 8x6!

\n

\"You

\n

Yes, that's all we really need to do a good enough classification.

\n

You can distinguish apples from oranges on ESP32 with 8x6 pixels only!


\n

Of course this is a tradeoff: you can't expect to achieve 99% accuracy while keeping the model small enough to fit on a microcontroller. 90% is an acceptable accuracy for me in this context.

\n

You have to keep in mind, moreover, that the feature vector size grows quadratically with the image side (if you keep the aspect ratio). A raw RGB image of 8x6 generates 144 features; an image of 16x12 generates 576. This was already causing random crashes on my ESP32.

\n

So we'll stick to 8x6 images.

\n

Now, how do you compact a 160x120 image to 8x6? With downsampling.

\n

This is the same technique we used in the post about motion detection on ESP32: we define a block size and average all the pixels inside the block to get a single value (you can refer to that post for more details).

\n

\"Image

\n

This time, though, we're working with RGB images instead of grayscale, so we'll repeat the exact same process 3 times, one for each channel.

\n

This is the code excerpt that does the downsampling.

\n
uint16_t rgb_frame[HEIGHT / BLOCK_SIZE][WIDTH / BLOCK_SIZE][3] = { 0 };\n\nvoid grab_image() {\n    // NOTE: rgb_frame must be reset to zero before accumulating a new frame\n    for (size_t i = 0; i < len; i += 2) {\n        // get r, g, b from the buffer\n        // see later\n\n        const size_t j = i / 2;\n        // transform x, y in the original image to x, y in the downsampled image\n        // by dividing by BLOCK_SIZE\n        const uint16_t x = j % WIDTH;\n        const uint16_t y = j / WIDTH;\n        const uint8_t block_x = x / BLOCK_SIZE;\n        const uint8_t block_y = y / BLOCK_SIZE;\n\n        // average pixels in block (accumulate; the division that completes\n        // the average is shown in the snippet below)\n        rgb_frame[block_y][block_x][0] += r;\n        rgb_frame[block_y][block_x][1] += g;\n        rgb_frame[block_y][block_x][2] += b;\n    }\n}
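\n

Note that the excerpt above only accumulates pixel values into each block: to turn the sums into actual averages you still need one division per cell. Here is a minimal sketch of that step (my addition, not part of the original excerpt):

\n
// complete the block average: each cell accumulated BLOCK_SIZE x BLOCK_SIZE pixels\nvoid average_blocks() {\n    for (int y = 0; y < HEIGHT / BLOCK_SIZE; y++)\n        for (int x = 0; x < WIDTH / BLOCK_SIZE; x++)\n            for (int c = 0; c < 3; c++)\n                rgb_frame[y][x][c] /= BLOCK_SIZE * BLOCK_SIZE;\n}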
\n

Extracting RGB components

\n

The ESP32 camera can store the image in different formats; the three of interest to us are listed below (there are a couple more available):

\n
\n
  1. grayscale: no color information, just the intensity is stored. The buffer has size HEIGHT * WIDTH
  2. RGB565: stores each RGB pixel in two bytes, with 5 bits for red, 6 for green and 5 for blue. The buffer has size HEIGHT * WIDTH * 2
  3. JPEG: encodes (in hardware?) the image to JPEG. The buffer has a variable length, based on the encoding results
\n

For our purpose, we'll use the RGB565 format and extract the 3 components from the 2 bytes with the following code.

\n

\"taken

\n
config.pixel_format = PIXFORMAT_RGB565;\n\n// buf and len come from the camera frame buffer (frame->buf, frame->len)\nfor (size_t i = 0; i < len; i += 2) {\n    const uint8_t high = buf[i];\n    const uint8_t low  = buf[i+1];\n    const uint16_t pixel = (high << 8) | low;\n\n    const uint8_t r = (pixel & 0b1111100000000000) >> 11;\n    // green occupies bits 5-10: shift by 5 to keep all 6 bits\n    // (shifting by 6 would drop the least significant one)\n    const uint8_t g = (pixel & 0b0000011111100000) >> 5;\n    const uint8_t b = (pixel & 0b0000000000011111);\n}
\n
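
If you prefer to work in the familiar 0-255 range, you can expand the 5/6-bit components back to 8 bits with a couple of shifts. This is a common bit-replication trick, not code from the original project:

\n
// expand the RGB565 components to the 0-255 range\nconst uint8_t r8 = (r << 3) | (r >> 2);   // 5 bits -> 8 bits\nconst uint8_t g8 = (g << 2) | (g >> 4);   // 6 bits -> 8 bits\nconst uint8_t b8 = (b << 3) | (b >> 2);\n\n// if you accumulate 8-bit values into blocks, use a wide accumulator\n// (e.g. uint32_t) so the per-block sums can't overflow uint16_t
\n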

Record sample images

\n

Now that we can grab the images from the camera, we'll need to take a few samples of each object we want to recognize.

\n

Before doing so, we'll linearize the image matrix to a 1-dimensional vector, because that's what our prediction function expects.

\n
#define H (HEIGHT / BLOCK_SIZE)\n#define W (WIDTH / BLOCK_SIZE)\n\nvoid linearize_features() {\n  size_t i = 0;\n  double features[H*W*3] = {0};\n\n  for (int y = 0; y < H; y++) {\n    for (int x = 0; x < W; x++) {\n      features[i++] = rgb_frame[y][x][0];\n      features[i++] = rgb_frame[y][x][1];\n      features[i++] = rgb_frame[y][x][2];\n    }\n  }\n\n  // print to serial\n  for (size_t i = 0; i < H*W*3; i++) {\n    Serial.print(features[i]);\n    Serial.print('\\t');\n  }\n\n  Serial.println();\n}
\n

Now you can set up your acquisition environment and take the samples: 15-20 of each object will do the job.

\n
\nImage acquisition is a very noisy process: even keeping the camera still, you will get fluctuating values.
You need to be very accurate during this phase if you want to achieve good results.
I suggest you immobilize your camera with tape to a flat surface or use some kind of photographic easel.\n
\n

Training the classifier

\n

To train the classifier, save the features for each object in a file, one features vector per line. Then follow the steps on how to train a ML classifier for Arduino to get the exported model.

\n
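
As an illustration (the file names are my own, not mandated by the tutorial), the dataset folder could look like this:

\n
dataset/\n  apple.csv     # one feature vector per line: 144 comma-separated values\n  orange.csv
\n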

You can experiment with different classifier configurations.

\n

My features were easily distinguishable, so I got great results (100% accuracy) with any kernel (even linear).

\n

One odd thing happened with the RBF kernel: I had to use an extremely low gamma value (0.0000001). Can anyone explain why? I usually go with a default value of 0.001. (My guess: with unscaled features spanning a large range, the squared distances inside the RBF kernel become huge, so gamma has to shrink accordingly.)

\n

The model produced 13 support vectors.

\n

I did no feature scaling: you could try it if you're classifying more than 2 classes and getting poor results.

\n
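
If you want to try feature scaling, a minimal min-max normalization over the feature vector could look like the sketch below (my addition, not part of the original project; remember to apply the exact same scaling at training and inference time):

\n
// scale each feature to the [0, 1] range (min-max normalization)\nvoid scale_features(double *features, size_t length) {\n    double lo = features[0];\n    double hi = features[0];\n\n    for (size_t i = 1; i < length; i++) {\n        if (features[i] < lo) lo = features[i];\n        if (features[i] > hi) hi = features[i];\n    }\n\n    // avoid division by zero on constant vectors\n    if (hi == lo)\n        return;\n\n    for (size_t i = 0; i < length; i++)\n        features[i] = (features[i] - lo) / (hi - lo);\n}
\n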

\"Apple

\n

Real world example

\n

If you followed all the steps above, you should now have a model capable of detecting if your camera is pointed at an apple or an orange, as you can see in the following video.

\n
\n

\n

The little white object you see at the bottom of the image is the camera, taped to the desk.

\n

Did you think it was possible to do simple image classification on your ESP32?

\n

Disclaimer

\n

This is not full-fledged object recognition: it can't label objects as you walk around, as TensorFlow can do, for example.

\n

You have to carefully craft your setup and be as consistent as possible between training and inferencing.

\n

Still, I think this is a fun proof-of-concept that can have useful applications in simple scenarios where you can live with a fixed camera and don't want to use a full Raspberry Pi.

\n

In the coming weeks I plan to finally try TensorFlow Lite for Microcontrollers on my ESP32, so I'll do a comparison with this example and report my results.

\n

Now that you can do image classification on your ESP32, can you think of a use case you will be able to apply this code to?

\n

Let me know in the comments, we could even try to realize it together if you need some help.

\n
\r\n

Check the full project code on Github

\n

The post Apple or Orange? Image recognition with ESP32 and Arduino appeared first on Eloquent Arduino Blog.

\n", "content_text": "Do you have an ESP32 camera? \nWant to do image recognition directly on your ESP32, without a PC?\nIn this post we'll look into a very basic image recognition task: distinguish apples from oranges with machine learning.\n\n\nImage recognition is a very hot topic these days in the AI/ML landscape. Convolutional Neural Networks really shines in this task and can achieve almost perfect accuracy on many scenarios.\nSadly, you can't run CNN on your ESP32, they're just too large for a microcontroller.\nSince in this series about Machine Learning on Microcontrollers we're exploring the potential of Support Vector Machines (SVMs) at solving different classification tasks, we'll take a look into image classification too.\nTable of contentsWhat we're going to doFeatures definitionExtracting RGB componentsRecord samples imageTraining the classifierReal world exampleDisclaimer\nWhat we're going to do\nIn a previous post about color identification with Machine learning, we used an Arduino to detect the object we were pointing at with a color sensor (TCS3200) by its color: if we detected yellow, for example, we knew we had a banana in front of us.\nOf course such a process is not object recognition at all: yellow may be a banane, or a lemon, or an apple.\nObject inference, in that case, works only if you have exactly one object for a given color.\nThe objective of this post, instead, is to investigate if we can use the MicroML framework to do simple image recognition on the images from an ESP32 camera.\nThis is much more similar to the tasks you do on your PC with CNN or any other form of NN you are comfortable with. Sure, we will still apply some restrictions to fit the problem on a microcontroller, but this is a huge step forward compared to the simple color identification.\n\nIn this context, image recognition means deciding which class (from the trained ones) the current image belongs to. This algorithm can't locate interesting objects in the image, neither detect if an object is present in the frame. It will classify the current image based on the samples recorded during training.\n\nAs any beginning machine learning project about image classification worth of respect, our task will be to distinguish an orange from an apple.\nFeatures definition\nI have to admit that I rarely use NN, so I may be wrong here, but from the examples I read online it looks to me that features engineering is not a fundamental task with NN.\nThose few times I used CNN, I always used the whole image as input, as-is. I didn't extracted any feature from them (e.g. color histogram): the CNN worked perfectly fine with raw images.\nI don't think this will work best with SVM, but in this first post we're starting as simple as possible, so we'll be using the RGB components of the image as our features. In a future post, we'll introduce additional features to try to improve our results.\nI said we're using the RGB components of the image. 
But not all of them.\nEven at the lowest resolution of 160x120 pixels, a raw RGB image from the camera would generate 160x120x3 = 57600 features: way too much.\nWe need to reduce this number to the bare minimum.\nHow much pixels do you think are necessary to get reasonable results in this task of classifying apples from oranges?\nYou would be surprised to know that I got 90% accuracy with an RGB image of 8x6!\n\nYes, that's all we really need to do a good enough classification.\nYou can distinguish apples from oranges on ESP32 with 8x6 pixels only!Click To Tweet\nOf course this is a tradeoff: you can't expect to achieve 99% accuracy while mantaining the model size small enough to fit on a microcontroller. 90% is an acceptable accuracy for me in this context.\nYou have to keep in mind, moreover, that the features vector size grows quadratically with the image size (if you keep the aspect ratio). A raw RGB image of 8x6 generates 144 features: an image of 16x12 generates 576 features. This was already causing random crashes on my ESP32.\nSo we'll stick to 8x6 images.\nNow, how do you compact a 160x120 image to 8x6? With downsampling.\nThis is the same tecnique we've used in the post about motion detection on ESP32: we define a block size and average all the pixels inside the block to get a single value (you can refer to that post for more details).\n\nThis time, though, we're working with RGB images instead of grayscale, so we'll repeat the exact same process 3 times, one for each channel.\nThis is the code excerpt that does the downsampling.\nuint16_t rgb_frame[HEIGHT / BLOCK_SIZE][WIDTH / BLOCK_SIZE][3] = { 0 };\n\nvoid grab_image() {\n for (size_t i = 0; i < len; i += 2) {\n // get r, g, b from the buffer\n // see later\n\n const size_t j = i / 2;\n // transform x, y in the original image to x, y in the downsampled image\n // by dividing by BLOCK_SIZE\n const uint16_t x = j % WIDTH;\n const uint16_t y = floor(j / WIDTH);\n const uint8_t block_x = floor(x / BLOCK_SIZE);\n const uint8_t block_y = floor(y / BLOCK_SIZE);\n\n // average pixels in block (accumulate)\n rgb_frame[block_y][block_x][0] += r;\n rgb_frame[block_y][block_x][1] += g;\n rgb_frame[block_y][block_x][2] += b;\n }\n}\n\r\n\r\n\r\n \r\n\tFinding this content useful?\r\n\r\n\t\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t \r\n \r\n \r\n \r\n\r\n\r\n\r\n\nExtracting RGB components\nThe ESP32 camera can store the image in different formats (of our interest \u2014 there are a couple more available):\n\ngrayscale: no color information, just the intensity is stored. The buffer has size HEIGHT*WIDTH\nRGB565: stores each RGB pixel in two bytes, with 5 bit for red, 6 for green and 5 for blue. The buffer has size HEIGHT * WIDTH * 2\nJPEG: encodes (in hardware?) the image to jpeg. 
The buffer has a variable length, based on the encoding results\n\nFor our purpose, we'll use the RGB565 format and extract the 3 components from the 2 bytes with the following code.\n\nconfig.pixel_format = PIXFORMAT_RGB565;\n\nfor (size_t i = 0; i < len; i += 2) {\n const uint8_t high = buf[i];\n const uint8_t low = buf[i+1];\n const uint16_t pixel = (high << 8) | low;\n\n const uint8_t r = (pixel & 0b1111100000000000) >> 11;\n const uint8_t g = (pixel & 0b0000011111100000) >> 6;\n const uint8_t b = (pixel & 0b0000000000011111);\n}\nRecord samples image\nNow that we can grab the images from the camera, we'll need to take a few samples of each object we want to racognize.\nBefore doing so, we'll linearize the image matrix to a 1-dimensional vector, because that's what our prediction function expects.\n#define H (HEIGHT / BLOCK_SIZE)\n#define W (WIDTH / BLOCK_SIZE)\n\nvoid linearize_features() {\n size_t i = 0;\n double features[H*W*3] = {0};\n\n for (int y = 0; y < H; y++) {\n for (int x = 0; x < W; x++) {\n features[i++] = rgb_frame[y][x][0];\n features[i++] = rgb_frame[y][x][1];\n features[i++] = rgb_frame[y][x][2];\n }\n }\n\n // print to serial\n for (size_t i = 0; i < H*W*3; i++) {\n Serial.print(features[i]);\n Serial.print('\\t');\n }\n\n Serial.println();\n}\nNow you can setup your acquisition environment and take the samples: 15-20 of each object will do the job.\n\nImage acquisition is a very noisy process: even keeping the camera still, you will get fluctuating values. You need to be very accurate during this phase if you want to achieve good results. I suggest you immobilize your camera with tape to a flat surface or use some kind of photographic easel.\n\nTraining the classifier\nTo train the classifier, save the features for each object in a file, one features vector per line. Then follow the steps on how to train a ML classifier for Arduino to get the exported model.\nYou can experiment with different classifier configurations. \nMy features were well distinguishable, so I had great results (100% accuracy) with any kernel (even linear).\nOne odd thing happened with the RBF kernel: I had to use an extremely low gamma value (0.0000001). Does anyone can explain me why? 
I usually go with a default value of 0.001.\nThe model produced 13 support vectors.\nI did no features scaling: you could try it if classifying more than 2 classes and having poor results.\n\nReal world example\nIf you followed all the steps above, you should now have a model capable of detecting if your camera is shotting an apple or an orange, as you can see in the following video.\nhttps://eloquentarduino.github.io/wp-content/uploads/2020/01/Apple-vs-Orange.mp4\n\nThe little white object you see at the bottom of the image is the camera, taped to the desk.\nDid you think it was possible to do simple image classification on your ESP32?\nDisclaimer\nThis is not full-fledged object recognition: it can't label objects while you walk as Tensorflow can do, for example.\nYou have to carefully craft your setup and be as consistent as possible between training and inferencing.\nStill, I think this is a fun proof-of-concept that can have useful applications in simple scenarios where you can live with a fixed camera and don't want to use a full Raspberry Pi.\nIn the next weeks I settled to finally try TensorFlow Lite for Microcontrollers on my ESP32, so I'll try to do a comparison between them and this example and report my results.\nNow that you can do image classification on your ESP32, can you think of a use case you will be able to apply this code to? \nLet me know in the comments, we could even try realize it together if you need some help.\n\r\nCheck the full project code on Github\nL'articolo Apple or Orange? Image recognition with ESP32 and Arduino proviene da Eloquent Arduino Blog.", "date_published": "2020-01-12T11:32:08+01:00", "date_modified": "2020-05-31T18:51:27+02:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "camera", "esp32", "microml", "svm", "Arduino Machine learning", "Computer vision" ], "attachments": [ { "url": "https://eloquentarduino.github.io/wp-content/uploads/2020/01/Apple-vs-Orange.mp4", "mime_type": "video/mp4", "size_in_bytes": 1642079 } ] }, { "id": "https://eloquentarduino.github.io/?p=779", "url": "https://eloquentarduino.com/projects/esp32-arduino-motion-detection", "title": "Motion detection with ESP32 cam only (Arduino version)", "content_html": "

Do you have an ESP32 camera? Do you want to do motion detection WITHOUT ANY external hardware?

\n

Here's a tutorial made just for you: 30 lines of code and you will know when something changes in your video stream \ud83c\udfa5

\n

\"ESP32

\n

\n

** See the updated version of this project: it's easier to use and waaay faster: Easier, faster, pure video ESP32 cam motion detection **

\n

Table of contents
  1. What is (naive) motion detection?
  2. Can't I use an external PIR?
    1. External hardware
    2. Field of View
    3. Cold objects
  3. What do you need?
  4. How does it work?
    1. Downsampling
    2. Blocks difference threshold
    3. Image difference threshold
    4. Combining all together
  5. Real world example

\n

What is (naive) motion detection?

\n

Quoting from Wikipedia

\n
\n

Motion detection is the process of detecting a change in the position of an object relative to its surroundings or a change in the surroundings relative to an object

\n
\n

In this project, we're implementing what I call naive motion detection: that is, we're not focusing on a particular object and following its motion.

\n

We'll only detect if any considerable portion of the image changed from one frame to the next.

\n

We won't identify the location of the motion (that's the subject of a future project), nor what caused it. We will analyze the video stream in (almost) real-time and compare frames: if lots of pixels changed, we'll call it motion.

\n

Can't I use an external PIR?

\n

Several projects on the internet about motion detection with an ESP32 cam use an external PIR sensor to trigger the video recording.

\n

What's the problem with that approach?

\n

1. External hardware

\n

First of all, you need external hardware. If you're using a breadboard, no problem, you just need a couple more wires and you're good to go. But I have a nice M5stick camera (no affiliate link) which is already well packaged, so it won't be that easy to add a PIR sensor.

\n

2. Field of View

\n

PIR sensors have a limited FOV (field of view), so you will need more than one to cover the whole range of the camera.

\n

My camera, for example, has a fish-eye lens which gives it a 160\u00b0 field of view. Most cheap PIR sensors have a 120\u00b0 field of view, so one will not suffice. This adds even more bulk to my project.

\n

3. Cold objects

\n

PIR sensors get triggered by infrared light, which is emitted by hot bodies (like people and animals).

\n

But motion in a video stream can happen for a variety of reasons, not necessarily due to hot bodies, for example if you want to monitor a street for cars passing by.

\n

A PIR sensor can't do this: video motion detection can.

\n

ESP32 cam pure video motion detection can detect motion due to cold objects


\n
Do you like the motion effect at the beginning of the post? Check it out on Github
\n

What do you need?

\n

All you need for this project is a board with a camera sensor. As I said, I have a M5Stick Camera with fish-eye lens, but any ESP32 based camera should work out of the box:

\n
  1. ESP32 cam
  2. ESP32 eye
  3. TTGO camera
  4. ... any other flavor of ESP32 camera
\n

\"ESP32

\n

How does it work?

\n

Ok, let's get to the "technical" stuff.

\n

Simply put, the algorithm counts the number of different pixels from one frame to the next: if many pixels changed, it will detect motion.

\n

Well, it's almost like this.

\n

Of course such an algorithm will be very sensitive to noise (which is quite high on these low-cost cameras). We need to mitigate false-positive triggers.

\n

Downsampling

\n

One super-simple and super-effective way of doing this is to work with blocks, instead of pixels. A block is simply an N x N square, whose value is the average of the pixels it contains.

\n
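
In code, the value of a single block could be computed with something like this minimal sketch (names and memory layout are illustrative assumptions, not the project code):

\n
// average of one N x N block of a grayscale frame stored row-major,\n// assuming BLOCK_SIZE divides both WIDTH and HEIGHT\nuint8_t block_average(const uint8_t *frame, uint16_t block_x, uint16_t block_y) {\n    uint32_t sum = 0;\n\n    for (uint16_t dy = 0; dy < BLOCK_SIZE; dy++)\n        for (uint16_t dx = 0; dx < BLOCK_SIZE; dx++)\n            sum += frame[(block_y * BLOCK_SIZE + dy) * WIDTH + (block_x * BLOCK_SIZE + dx)];\n\n    return sum / (BLOCK_SIZE * BLOCK_SIZE);\n}
\n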

This greatly reduces sensitivity to noise, providing a more robust detection. Here's an example of what the "block-ing" operation does to an image.

\n

\"Image

\n

It's really a "pixelating" effect: you take the orginal image (let's say 320x240 pixels) and resize it to 10x smaller, 32x24.

\n

This has the added benefit that it's much more lightweight to work with a 32x24 matrix instead of a 320x240 one: if you want to do real-time detection, this is a MUST.

\n

How should you choose the scale factor?

\n

Well, it depends.

\n

It depends on the sensitivity you want to achieve. The higher the downsampling, the less sensitive your detection will be.

\n

If you want to detect a person passing 50cm away from the camera, you can increase this number without any problem. If you want to detect a dog 10m away, you should keep it in the 5-10 range.

\n

Experiment with your own use case and tweak by trial-and-error.

\n

Blocks difference threshold

\n

Once we've defined the block size, we need to detect if a block changed from one frame to the next.

\n

Of course, just testing for difference (current != prev) would be again too sensitive to noise. A block can change for a variety of reasons, the first of which is the bad camera quality.

\n

So we instead define a percent threshold above which we can say for sure the block actually changed. A good starting point could be 10-20%, but again you need to tweak this to your needs.

\n

The higher the threshold, the less sensitive the algorithm will be.

\n

In code it is calculated as

\n
float delta = abs(currentBlockValue - prevBlockValue) / prevBlockValue; // note: prevBlockValue must be > 0
\n

which indicates the relative increment/decrement from the previous value.

\n

Image difference threshold

\n

Now that we can detect if a block changed from one frame to the next, we can actually detect if the image changed.

\n

You could decide to trigger motion even if a single block changed, but I suggest you set a higher value here.

\n

Let's return to the 320x240 image example. With a 10x10 block, you'll be working with 32x24 = 768 blocks: will you call it "motion" if 1 out of 768 blocks changed value?

\n

I don't think so. You want something more robust. You want 50 blocks to change. Or at least 20 blocks. If you do the math, 20 blocks out of 768 is only 2.5% of the image, which is hardly noticeable.

\n

If you want to be robust, don't set this threshold too low. Again, tweak it with real-world experimenting.

\n

In code it is calculated as:

\n
float changedBlocksPercent = (float) changedBlocks / totalBlocks; // cast to float to avoid integer division
\n

Combining all together

\n

Recapping: when running the motion detection algorithm you have 3 parameters to set:

\n
\n
  1. the block size
  2. the block difference threshold
  3. the image difference threshold
\n

Let's pick 3 sensible defaults: block size = 10, block threshold = 15%, image threshold = 20%.

\n

What do these parameters translate to in practice?

\n

They mean that motion will be detected if 20% of the image, averaged in blocks of 10x10, changed its value by at least 15% from one frame to the next.

\n
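
To make the recap concrete, here is a hedged sketch of how the three parameters could combine in code, assuming the arrays current and prev hold the block-averaged frames; all names are illustrative, not the actual project code:

\n
#define BLOCK_THRESHOLD 0.15   // block difference threshold (15%)\n#define IMAGE_THRESHOLD 0.20   // image difference threshold (20%)\n\nbool detect_motion(const float *current, const float *prev, uint16_t totalBlocks) {\n    uint16_t changedBlocks = 0;\n\n    for (uint16_t i = 0; i < totalBlocks; i++) {\n        // relative change of the block, guarding against division by zero\n        const float base = prev[i] > 0 ? prev[i] : 1;\n        const float delta = fabs(current[i] - prev[i]) / base;\n\n        if (delta >= BLOCK_THRESHOLD)\n            changedBlocks++;\n    }\n\n    // motion only if enough blocks changed in the same frame\n    return (float) changedBlocks / totalBlocks >= IMAGE_THRESHOLD;\n}
\n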

\"ESP32

\n

As you can see from these numbers, you don't need high-definition images to (naively) detect if something happened in the image. Large areas of motion will be easily detectable, even at very low resolution.

\n

Real world example

\n

Now the fun part. I'll show you how it performs on a real-world scenario.

\n

To keep it simple, I wrote a sketch that does only motion detection, not video streaming over HTTP.

\n

This means you won't be able to see the original image recorded from the camera. Nevertheless, I have kept the block size to a minimum to allow for the best quality possible.

\n
\n

This is me passing my arm in front of the camera a few times.

\n

The grid you see represents the actual pixels used for the computation. Each cell corresponds to one pixel of the downscaled image.

\n

The orange cells highlight the pixels that the algorithm sees as "different" from one frame to the next. As you can see, some pixels are detected even if no motion is happening. That's the noise I talked about multiple times during the post.

\n

When I move my arm in the frame, you see lots of pixels become activated, so the "Motion" text appears.

\n

While moving the arm, you may notice what I call the "ghost" effect. You actually see 2 regions of motion: one is where my arm is now, which of course changed. The other is the region where my arm was in the previous frame, which returned to its original content.

\n

This is why I suggest you keep the image difference threshold to a high value: if some real motion happens, you will notice it for sure because the activated region of the image will be actually bigger than the actual object moving.

\n

Do you like the grid effect of the sample video? Let me know in the comment if you want me to share it.

\n

Or even better: subscribe to the newsletter and you will get it directly in your inbox with my next email.

\n
\r\n

Check the full project code on Github

\n

Check out also the gist for the visualization tool

\n

The post Motion detection with ESP32 cam only (Arduino version) appeared first on Eloquent Arduino Blog.

\n", "content_text": "Do you have an ESP32 camera? Do you want to do motion detection WITHOUT ANY external hardware?\nHere's a tutorial made just for you: 30 lines of code and you will know when something changes in your video stream \n\n\n ** See the updated version of this project: it's easier to use and waaay faster: Easier, faster, pure video ESP32 cam motion detection **\nTable of contentsWhat is (naive) motion detection?Can't I use an external PIR?External hardwareField of ViewCold objectsWhat do you need?How does it work?DownsamplingBlocks difference thresholdImage difference thresholdCombining all togetherReal world example\nWhat is (naive) motion detection?\nQuoting from Wikipedia\n\nMotion detection is the process of detecting a change in the position of an object relative to its surroundings or a change in the surroundings relative to an object\n\nIn this project, we're implementing what I call naive motion detection: that is, we're not focusing on a particular object and following its motion.\nWe'll only detect if any considerable portion of the image changed from one frame to the next.\nWe won't identify the location of motion (that's the subject for a next project), neither what caused it. We will analyze video stream in (almost) real-time and compare frame by frame: if lots of pixels changed, we'll call it motion.\nCan't I use an external PIR?\nSeveral projects on the internet about motion detection with an ESP32 cam use an external PIR sensor to trigger the video recording.\nWhat's the problem with that approach? \n1. External hardware\nFirst of all, you need external hardware. If you're using a breadboard, no problem, you just need a couple more wires and you're good to go. But I have a nice M5stick camera (no affiliate link), that's already well packaged, so it won't be that easy to add a PIR sensor.\n2. Field of View\nPIR sensors have a limited FOV (field of view), so you will need more than one to cover the whole range of the camera. \nMy camera, for example, has fish-eye lens which give me 160\u00b0 of view. Most cheap PIR sensors have a 120\u00b0 field of view, so one will not suffice. This adds even more space to my project.\n3. Cold objects\nPIR sensors gets triggered by infrared light. Infrared light gets emitted by hot bodies (like people and animals).\nBut motion in a video stream can happen for a variety of reasons, not necessarily due to hot bodies, for example if you want to monitor a street for cars passing by.\nA PIR sensor can't do this: video motion detection can.\nESP32 cam pure video motion detection can detect motion due to cold objectsClick To Tweet\n Do you like the motion effect at the beginning of the post? Check it out on Github\nWhat do you need?\nAll you need for this project is a board with a camera sensor. As I said, I have a M5Stick Camera with fish-eye lens, but any ESP32 based camera should work out of the box:\n\nESP32 cam\nESP32 eye\nTTGO camera\n... any other flavor of ESP32 camera\n\n\nHow does it work?\nOk, let's go to the "technical" stuff.\nSimply put, the algorithm counts the number of different pixels from one frame to the next: if many pixels changed, it will detect motion.\nWell, it's almost like this.\nOf course such an algorithm will be very sensitive to noise (which is quite high on these low-cost cameras). We need to mitigate false-positive triggers.\nDownsampling\nOne super-simple and super-effective way of doing this is to work with blocks, instead of pixels. 
A block is simply an N x N square, whose value is the average of the pixels it contains.\nThis greatly reduces sensitivity to noise, providing a more robust detection. Here's an example of what the the "block-ing" operation does to an image.\n\nIt's really a "pixelating" effect: you take the orginal image (let's say 320x240 pixels) and resize it to 10x smaller, 32x24. \nThis has the added benefit that it's much more lightweight to work with 32x24 matrix instead of 320x240 matrix: if you want to do real-time detection, this is a MUST.\nHow should you choose the scale factor?\nWell, it depends.\nIt depends on the sensitivity you want to achieve. The higher the downsampling, the less sensitive your detection will be. \nIf you want to detect a person passing 50cm away from the camera, you can increase this number without any problem. If you want to detect a dog 10m away, you should keep it in the 5-10 range.\nExperiment with your own use case a tweak with trial-and-error.\nBlocks difference threshold\nOnce we've defined the block size, we need to detect if a block changed from one frame to the next.\nOf course, just testing for difference (current != prev) would be again too sensitive to noise. A block can change for a variety of reasons, the first of which is the bad camera quality.\nSo we instead define a percent threshold above which we can say for sure the block actually changed. A good starting point could be 10-20%, but again you need to tweak this to your needs.\nThe higher the threshold, the less sensitive the algorithm will be.\nIn code it is calculated as\nfloat delta = abs(currentBlockValue - prevBlockValue) / prevBlockValue;\nwhich indicates the relative increment/decrement from the previous value.\nImage difference threshold\nNow that we can detect if a block changed from one frame to the next, we can actually detect if the image changed.\nYou could decide to trigger motion even if a single block changed, but I suggest you to set an higher value here.\nLet's return to the 320x240 image example. With a 10x10 block, you'll be working with 32x24 = 768 blocks: will you call it "motion" if 1 out of 768 blocks changed value?\nI don't think so. You want something more robust. You want 50 blocks to change. Or at least 20 blocks. If you do the math, 20 blocks out of 768 is only the 2.5% of change, which is hardly noticeable.\nIf you want to be robust, don't set this threshold to a too low value. Again, tweak with real world experimenting.\nIn code it is calculated as:\nfloat changedBlocksPercent = changedBlocks / totalBlocks\nCombining all together\nRecapping: when running the motion detection algorithm you have 3 parameters to set:\n\nthe block size\nthe block difference threshold\nthe image differerence threshold\n\nLet's pick 3 sensible defaults: block size = 10, block threshold = 15%, image threshold = 20%.\nWhat does these parameters translate to in the practice?\nThey mean that motion will be detected if 20% of the image, averaged in blocks of 10x10, changed its value by at least 15% from one frame to the next.\n\nAs you can see, you don't need high-definition images to (naively) detect if something happened to the image. Large area of motion will be easily detectable, even at very low resolution.\nReal world example\nNow the fun part. I'll show you how it performs on a real-world scenario.\nTo keep it simple, I wrote a sketch that does only motion detection, not video streaming over HTTP. \nThis means you won't be able to see the original image recorded from the camera. 
Nevertheless, I have kept the block size to a minimum to allow for the best quality possible.\nhttps://eloquentarduino.github.io/wp-content/uploads/2020/01/ESP32-camera-motion-detection-example.mp4\nThis is me passing my arm in front of the camera a few times.\nThe grid you see represents the actual pixels used for the computation. Each cell corresponds to one pixel of the downscaled image.\nThe orange cells highlight the pixels that the algorithm sees as "different" from one frame to the next. As you can see, some pixels are detected even if no motion is happening. That's the noise I talked about multiple times during the post.\nWhen I move my arm in the frame, you see lots of pixels become activated, so the "Motion" text appears. \nWhile moving the arm, you may notice what I call the "ghost" effect. You actually see 2 regions of motion: one is where my arm is now, which of course changed. The other is the region where my arm was in the previous frame, which returned to its original content.\nThis is why I suggest you keep the image difference threshold to a high value: if some real motion happens, you will notice it for sure because the activated region of the image will be actually bigger than the actual object moving.\nDo you like the grid effect of the sample video? Let me know in the comment if you want me to share it.\nOr even better: subscribe to the newsletter I you will get it directly in your inbox with my next mail.\n\r\n\r\n\r\n \r\n\tFinding this content useful?\r\n\r\n\t\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t \r\n \r\n \r\n \r\n\r\n\r\n\r\n\n\r\nCheck the full project code on Github\nCheck out also the gist for the visualization tool\nL'articolo Motion detection with ESP32 cam only (Arduino version) proviene da Eloquent Arduino Blog.", "date_published": "2020-01-05T12:08:08+01:00", "date_modified": "2020-06-03T13:17:09+02:00", "authors": [ { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" } ], "author": { "name": "simone", "url": "https://eloquentarduino.github.io/author/simone/", "avatar": "http://1.gravatar.com/avatar/d670eb91ca3b1135f213ffad83cb8de4?s=512&d=mm&r=g" }, "tags": [ "camera", "esp32", "Computer vision" ], "attachments": [ { "url": "https://eloquentarduino.github.io/wp-content/uploads/2020/01/ESP32-camera-motion-detection-example.mp4", "mime_type": "video/mp4", "size_in_bytes": 1673368 } ] } ] }