discussion / Camera Traps / 19 February 2021

MegaDetector on Edge Devices?

Hi all.  I'm a relatively new member of the community and have been trying to consume the many excellent videos, discussions, and resources before asking questions.  Hopefully I haven't missed this one somewhere along the way...

I have used off-the-shelf camera traps for many years and am kicking around some ideas in hopes of creating a better solution. I'm in my early days of learning ML and the many associated technologies, and it's been many years since I worked with microcontrollers. But, hey, what's the point of life without a few hurdles to overcome, right!? :)

I'm trying to understand whether it's possible to run MegaDetector on an edge device like a Raspberry Pi to perform detection on 1 to 10 images at a time (not large quantities). All the info I'm finding online seems to be focused on processing large image collections and therefore assumes the use of Azure or AWS compute resources.

Is this because it's not feasible/practical to run MegaDetector on the device?

BTW, please point me to another thread if this has already been discussed.

Thank you!
Rob




Thanks for your interest in MegaDetector!  You're right that it's not practical to run MegaDetector on edge devices; its architecture is chosen to prioritize accuracy over speed, so it likes a big GPU.  Well, more accurately... one *can* run anything on anything, but you will pay such a price in hassle and time that it's almost certainly not worth it.

Moreover, if we made a "light" version of MegaDetector (or any heavyweight model), it would still be too heavy for some environments, and too light (i.e., insufficiently accurate) for others. And you would still be spending lots of the model's capacity on animals and ecosystems that may not be relevant for you.

So... a more common approach among MD users who want to run edge models has been to take some sample unlabeled data from your specific ecosystem, or similar data that's publicly available (there's a lot of camera trap data at http://lila.science, for example), run that data through MegaDetector, and use the resulting boxes to train a model that fits your specific compute budget, in a framework that's easy to use in your environment (sometimes TFLite, often YOLO). This is an inelegant but very effective form of model compression, and it has the benefit of only asking your small model to deal with images that are relevant to your project (as opposed to MegaDetector, which uses up model capacity dealing with all kinds of ecosystems you may never see).
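
If it helps to make that concrete, here's a minimal sketch of the label-conversion step, assuming MegaDetector's standard batch-output JSON (normalized [x_min, y_min, width, height] boxes) and YOLO-style .txt labels. The file names, the confidence threshold, and the choice to keep only the animal class are placeholders to adapt to your project:

```python
import json
from pathlib import Path

# Map MegaDetector categories to the classes your small model will learn.
# MD's standard categories: "1" = animal, "2" = person, "3" = vehicle.
MD_TO_YOLO_CLASS = {"1": 0}   # here we keep only animals (adjust as needed)
CONF_THRESHOLD = 0.8          # keep only confident pseudo-labels (placeholder)

with open("md_output.json") as f:         # MegaDetector batch output (placeholder name)
    md_results = json.load(f)

label_dir = Path("labels")
label_dir.mkdir(exist_ok=True)

for image in md_results["images"]:
    lines = []
    for det in image.get("detections", []):
        if det["category"] not in MD_TO_YOLO_CLASS:
            continue
        if det["conf"] < CONF_THRESHOLD:
            continue
        # MD boxes are normalized [x_min, y_min, width, height];
        # YOLO wants normalized [x_center, y_center, width, height].
        x, y, w, h = det["bbox"]
        lines.append(f"{MD_TO_YOLO_CLASS[det['category']]} "
                     f"{x + w / 2:.6f} {y + h / 2:.6f} {w:.6f} {h:.6f}")
    out = label_dir / (Path(image["file"]).stem + ".txt")
    out.write_text("\n".join(lines))
```

From there, training the small model is just ordinary YOLO (or TFLite) training on those pseudo-labels.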

Hope that helps... of course, if someone wants to take on the task of building a *family* of compressed MegaDetectors to provide a more off-the-shelf approach, we'd like to see that happen too!

-Dan

To follow up with results from my testing: you can run MegaDetector on a Pi, if you're not in a hurry. I followed the instructions on GitHub for running on Linux, and the installation of Python packages went smoothly. On a Pi 4 with 8GB RAM it took just over two minutes per image (using 3-megapixel images from Reconyx cameras), so if you're capturing fewer than ~700 images per day, the Pi could keep up.

It won't keep up with real-time captures, though, particularly if you get bursts of images; even high-end GPUs struggle to process more than an image per second. It could still be quite a useful way to reduce the processing burden at the end of a camera-trapping session, or to trigger another event, such as sending only the images with animals via email/telemetry, etc.
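
For that last use case, the selection step is simple once you have MegaDetector's output JSON. Here's a rough sketch; the file name and confidence threshold are placeholders, and the actual email/telemetry step is left as a stub:

```python
import json

ANIMAL = "1"      # MegaDetector's standard category ID for animals
THRESHOLD = 0.5   # hypothetical confidence cutoff; tune for your site

with open("md_output.json") as f:   # placeholder output file name
    results = json.load(f)

# Keep only images with at least one confident animal detection.
to_send = [
    im["file"]
    for im in results["images"]
    if any(d["category"] == ANIMAL and d["conf"] >= THRESHOLD
           for d in im.get("detections", []))
]

print(f"{len(to_send)} of {len(results['images'])} images contain animals")
# ...hand to_send off to your email/telemetry pipeline here
```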

Hi Rob,

I struggled with getting MegaDetector v4 (MDv4) working on edge devices in a usable way until MegaDetector v5 (released June '22). The biggest problem for me with MDv4 was the amount of RAM required just to initialize the pre-trained model.

With MegaDetector v5, they switched the underlying architecture to YOLOv5, and that made all the difference. I am now able to run MegaDetector on an NVIDIA Jetson Nano with 4GB RAM. It is usable for our purposes now, with inference running at about 1 image/sec.

I'm using a Jetson Nano 4GB on a developer kit carrier board (version B01). It took a little playing around to get the GPU working properly with PyTorch to run MDv5, but it wasn't too bad. I am going to continue to optimize things, but it is now workable; switching to YOLOv5 was a great move. I suspect it will work okay on a Raspberry Pi too, although probably slower given the Pi's weaker GPU.
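
In case it saves anyone some setup time: since MDv5 is just a YOLOv5 checkpoint, one quick way to sanity-check the PyTorch/GPU setup is to load the weights through YOLOv5's torch.hub interface and confirm inference lands on the GPU. This is only a sketch, not the official inference harness; the weights path (md_v5a.0.0.pt) and test image are whatever you have locally:

```python
import torch

print("CUDA available:", torch.cuda.is_available())  # should be True on the Nano

# MDv5 is a YOLOv5 model, so the standard YOLOv5 custom-weights loader works.
model = torch.hub.load("ultralytics/yolov5", "custom", path="md_v5a.0.0.pt")
model.to("cuda" if torch.cuda.is_available() else "cpu")

results = model("test_image.jpg")  # any camera-trap image you have handy
results.print()                    # detection summary plus per-image timing
```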

Happy to share my experiences or fill in any other gaps.

Interesting thread regarding the new inference speed with MDv5. I know there's an experiment at the moment exploring live inference from a video stream, with the bounding boxes and inference/species IDs overlaid as a second, "slower" layer (e.g., via OBS). The output would be a smooth 30 fps stream with a bounding-box layer generated and updated at roughly 1 fps, mixed together into one stream. This will, of course, all be running on a powerful desktop with plenty of RAM and a high-end GPU.
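
For anyone curious what that two-layer idea looks like in code, here's a hedged OpenCV sketch: the display loop runs at full frame rate, detection runs on roughly one frame per second, and the most recent boxes are drawn on every frame until the next inference finishes. A real setup would push inference to a separate thread or process (or an OBS layer, as above); the camera index and weights path are placeholders:

```python
import cv2
import torch

# Same torch.hub loading pattern as above; md_v5a.0.0.pt is a placeholder path.
model = torch.hub.load("ultralytics/yolov5", "custom", path="md_v5a.0.0.pt")

cap = cv2.VideoCapture(0)                # or an RTSP stream URL
fps = cap.get(cv2.CAP_PROP_FPS) or 30    # fall back to 30 if the source won't say
last_boxes = []                          # boxes from the most recent inference
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Run detection on ~1 frame per second; reuse the old boxes in between.
    if frame_idx % int(fps) == 0:
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # model expects RGB
        results = model(rgb)
        last_boxes = results.xyxy[0].tolist()          # [x1, y1, x2, y2, conf, cls]
    for x1, y1, x2, y2, conf, cls in last_boxes:
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)),
                      (0, 255, 0), 2)
    cv2.imshow("stream", frame)
    frame_idx += 1
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```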

For edge devices, you may want to look at Edge Impulse and compare it against YOLOv5.