Hi wildlabbers,
I have another colleague looking for support getting started with AI for processing videos from their camera traps - specifically for species ID of individual jaguars as a start (they have training data). Is anyone, or any of the existing platforms, doing this? I can't tell from Dan's giant list of ML platforms which ones (if any?) support video!
Thanks for your help!
Steph
27 November 2023 11:42am
Hi!
There was a very related and very recent thread on video from camera traps here:
Video camera trap analysis help | WILDLABS
Hello, I'm a complete newbie so any help would be appreciated. I have never trained any ML models (I have junior/mid experience with Python and R) nor annotated data, but would like to learn. We have set up a couple of camera traps that record videos at vulture feeding sites and I am tasked with analyzing the video data for their presence. I thought that using MegaDetector could work, but it seems to me it only takes in images, so I don't know where to start. What would you use? Is there a pipeline (or articles/repositories/videos) about how best to approach the task? Thanks in advance.
It is always possible to split videos into frames using ffmpeg (or https://ffmpeg-batch.sourceforge.io) or similar and then feed them to the ML workflow. I am pretty sure ffmpeg can do something clever with metadata and filenames so the metadata follows the files.
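To make the frame-splitting step above concrete, here is a minimal sketch of building an ffmpeg command from Python. It assumes ffmpeg is installed and on your PATH; the function name, the output directory, and the example filenames are all made up for illustration. The trick for keeping metadata with the files is simply to embed the source clip's name (which typically encodes camera/site) in every frame's filename.

```python
from pathlib import Path

def build_frame_extract_cmd(video, out_dir, fps=1.0):
    """Build an ffmpeg command that samples `fps` frames per second
    from `video`. Each output frame is named after the source clip,
    so the frame's origin follows it into the ML workflow."""
    stem = Path(video).stem  # e.g. "CAM01_clip3"
    return [
        "ffmpeg",
        "-i", str(video),            # input video
        "-vf", f"fps={fps}",         # keep this many frames per second
        str(Path(out_dir) / f"{stem}_%06d.jpg"),  # CAM01_clip3_000001.jpg, ...
    ]

# Example: one frame per second from a hypothetical clip.
cmd = build_frame_extract_cmd("CAM01_clip3.mp4", "frames", fps=1)
print(" ".join(cmd))
# You would then run it with, e.g., subprocess.run(cmd, check=True)
```

At 1 fps a typical 30-second trigger clip yields ~30 images, which you can then pass to an image-based tool like MegaDetector; raise `fps` if the animal moves through frame quickly.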
Cheers,
27 November 2023 1:58pm
Zamba Cloud supports video!
30 November 2023 8:45am
I can help you with that - please send us some of the videos and we will build a pipeline for it.
aakash at thinkevolvelabs dot com
- Aakash Gupta
1 December 2023 8:40pm
ffmpeg + TrapTagger (camera traps + re-ID) and Whiskerbook (re-ID)?
TrapTagger About
Camera traps are a powerful tool for studying wildlife and are widely used by ecologists around the world. However, these motion-activated cameras can generate immense amounts of data, of which a large proportion is simply empty images generated by the wind and other disturbances. Traditionally, these thousands of images were manually sorted by the ecologists themselves in an error-prone and extremely time-consuming process – often taking longer than the camera-trap survey itself. This presented a massive obstacle to camera-trap-focused research. As such, we wanted to develop a way to make the process drastically shorter, and more accurate. In turn this would allow ecologists to run more camera trap surveys more often, resulting in more complete data, and more statistically robust results across greater and greater areas. Moreover, this would allow ecologists to spend their new-found time on the analysis of the data itself, and make the world a better place.
WildEye
Lars Holst Hansen
Aarhus University