Camera traps have been a key part of the conservation toolkit for decades. Remotely triggered video or still cameras allow researchers and managers to monitor cryptic species, survey populations, and support enforcement responses by documenting illegal activities. Increasingly, machine learning is being used to automate the processing of the data camera traps generate.
A recent study showed that, despite camera traps being well-established and widely used tools in conservation, their development has plateaued since the modern model emerged in the mid-2000s, leaving users struggling with many of the same issues they faced a decade ago. That manufacturer ratings have not improved over time, despite technological advances, demonstrates the need for a new generation of innovative conservation camera traps. Join this group to explore existing efforts, established needs, and what next-generation camera traps might look like, including the integration of AI for data processing through initiatives like Wildlife Insights and Wild Me.
Group Highlights:
Our past Tech Tutors seasons featured multiple episodes for experienced and new camera trappers. How Do I Repair My Camera Traps?, with WILDLABS members Laure Joanny, Alistair Stewart, and Rob Appleby, shared many troubleshooting and DIY resources for common issues.
For camera trap users looking to incorporate machine learning into the data analysis process, Sara Beery's How do I get started using machine learning for my camera traps? is an incredible resource discussing the user-friendly tool MegaDetector.
And for those who are new to camera trapping, Marcella Kelly's How do I choose the right camera trap(s) based on interests, goals, and species? will help you make important decisions based on factors like species, environment, power, durability, and more.
Finally, for an in-depth conversation on camera trap hardware and software, check out the Camera Traps Virtual Meetup featuring Sara Beery, Roland Kays, and Sam Seccombe.
And while you're here, be sure to stop by the camera trap community's collaborative troubleshooting data bank, where we're compiling common problems with the goal of creating a consistent place to exchange tips and tricks!
Header photo: ACEAA-Conservacion Amazonica
Group members include:
- Professor of Biology, St. Lawrence University
- Technologist & Wildlife Photographer, Google
- Conservation biologist and PhD student at Danau Girang Field Centre & Cardiff University, specialising in movement ecology and behavioural research on Sunda pangolins in Malaysian Borneo, using camera traps, biologging, and conservation social science
- E-learning specialist with a conservation background, VerdantLearn
- Software Engineer for Wildlife Conservation, Wild Me
- National Geographic Society
- World Wide Fund for Nature (WWF)
- Purdue University
- Research ecologist studying predator-prey dynamics of large mammals, Tithonus Wildlife Research
- Pepperwood Preserve
The 2021 #Tech4Wildlife Photo Challenge: Community Highlights
25 March 2021 at 12:00am
Kaggle Competition: iWildcam 2021 - FGVC8
12 March 2021 at 12:00am
Funding Opportunity: COVID-19 Science Fund
10 March 2021 at 12:00am
Seeing #Tech4Wildlife With Unseen Empire
8 March 2021 at 12:00am
Resource: WildID
8 March 2021 at 12:00am
Virtual Event: Carnivores and Camera Traps
4 March 2021 at 12:00am
Collaboration Spotlight: BoomBox
26 February 2021 at 12:00am
MegaDetector on Edge Devices?
19 February 2021 at 12:30am
15 April 2021 at 08:16am
To follow up with results from my testing: you can run MegaDetector on a Pi, if you're not in a hurry. I followed the instructions on GitHub for running on Linux, and the installation of Python packages went smoothly. On a Pi 4 with 8GB RAM it took just over 2 min per image (using 3 megapixel images from Reconyx cameras). So if you're capturing fewer than 700 images per day, the Pi could keep up. It won't keep up with real-time captures though, particularly if you get bursts of images; even high-end GPUs struggle to process more than an image per second. It could be quite a useful way to reduce the processing burden at the end of a camera trapping session, or to trigger another event, such as sending only images with animals via email or telemetry, etc.
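As a rough sanity check on those numbers, here is a minimal sketch of the throughput arithmetic, assuming the roughly 2 min/image timing reported above (substitute your own measurement for other hardware or image sizes):

```python
# Back-of-the-envelope throughput for MegaDetector on a Raspberry Pi 4 (8 GB).
# 120 s/image is the "just over 2 min" figure reported above for 3 MP images,
# so ~720 images/day is an upper bound, consistent with the <700/day estimate.
seconds_per_image = 120
seconds_per_day = 24 * 60 * 60   # 86,400

images_per_day = seconds_per_day // seconds_per_image
print(f"Sustainable throughput: ~{images_per_day} images/day")  # ~720
```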
Wide angle camera trap
17 February 2021 at 03:51pm
19 March 2021 at 02:40pm
Important considerations raised by Peter!
In my case it was not important though, as I was using the camera trap in timelapse mode.
19 March 2021 at 03:42pm
I agree the PIR sensor and the camera will be "seeing" different pictures, but I believe that is exactly the effect being sought: at the moment, too much of the elephant is out of the frame when the camera is triggered, and the wide-angle lens is desired so that more of the elephant would fit in the frame at that same triggering point.
19 March 2021 at 06:51pm
In the TrailCamPro link above (comparing FOV and Detection Angle), I see a few "panorama" camera models. For example, the Moultrie 150i or Moultrie 180 (https://www.moultriefeeders.com/catalogsearch/result/?q=panoramic) - although they are all listed as discontinued.
It seems this might be a solution for Daniela's scenario.
I'm also interested because it could offer a more forgiving setup (if the subject does not travel exactly where expected).
Has anyone here worked with a panoramic camera? What did you find to be their pros/cons?
Recommendations needed: Rechargeable batteries for camera traps
16 February 2021 at 04:15pm
9 April 2021 at 03:55pm
I have only used Tenergy NiMH rechargeables, which put out 1.2 V. We've used them on Bushnell Trophycams, which take 2x4 battery sets, i.e. you only need 4 batteries to have enough voltage; the additional batteries don't increase the voltage, they only prolong the working time. So that's 4x1.2 = 4.8 V instead of the expected 6 V from alkaline batteries. I would think that Cuddeback has a similar circuit setup, so an external 12 V battery might actually be too much, as some have already pointed out. With regular NiMH batteries, what matters most is getting batteries with a high mAh rating. The ones sold commercially usually have very low mAh, so they won't last very long. Ours are 2400 mAh, I think, and they work reasonably well; they can last for about a month in the field, and the IR flash works.
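To make the voltage arithmetic explicit, here is a minimal sketch comparing nominal alkaline and NiMH pack voltages; the 4- and 8-cell counts reflect the configurations discussed in this thread:

```python
# Nominal pack voltage for series-wired AA cells: cells x volts_per_cell.
ALKALINE_V = 1.5   # nominal alkaline AA voltage
NIMH_V = 1.2       # nominal NiMH AA voltage

for cells in (4, 8):
    print(f"{cells} cells: alkaline {cells * ALKALINE_V:.1f} V, "
          f"NiMH {cells * NIMH_V:.1f} V")
# 4 cells: alkaline 6.0 V, NiMH 4.8 V
# 8 cells: alkaline 12.0 V, NiMH 9.6 V
```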
9 April 2021 at 04:20pm
Hello Akiba
That sounds good!
Do you have a picture of the complete setup?
Best,
Juan
9 April 2021 at 06:10pm
While the manufacturer might claim that the camera requires 8xAA at 1.5 V each, most likely it will work just fine with NiMH batteries that have a nominal voltage of 1.2 V.
I have used Eneloops with Reconyx cameras for a long time, as well as with handheld GPSes and a myriad of electronic devices, and not once run into trouble because of the lower voltage. Your camera should have a setting in the menu to select NiMH batteries; that will prevent it from shutting down too soon.
I suggest you do your own testing: run 2 cameras side by side, one on 1.5 V alkaline batteries and one on 1.2 V NiMH rechargeables, until they switch off, then check the voltage on the "empty" cells.
Your issue will likely not be the voltage of the cell, but the current the battery can deliver, as it has to recharge the capacitor for the incandescent flash. I see that the manufacturer declares up to a 20 s recharge time for night photos, which is a lot. That is a downside of a colour camera.
I don't know what the best source for batteries is in SA, but if possible, get rechargeable batteries from IKEA (Ladda NiMH batteries). They are rebranded Eneloop Pros for around 30% less, and I have yet to find a better battery for a camera trap. Otherwise, as mentioned before, Eneloop Pros will be hard to beat for performance and reliability.
Best Camera Trap Models Database: Input Needed
9 February 2021 at 08:39pm
9 April 2022 at 11:02am
Thank you, Joaquín, for the useful Coto Doñana camera evaluation information in https://zslpublications.onlinelibrary.wiley.com/doi/full/10.1111/jzo.12945 . This initiative will presumably eventually appear on the European Observatory of Wildlife map. Currently it displays only three entries for Iberia, two of which, disappointingly, appear to serve as guides to where "Big Animals" can be killed.
9 April 2022 at 11:46am
Many thanks, "mactadpole" for the promising remarks concerning the Browning Dark Ops Pro XD dual-lens BTC-6PXD:
"...we are extremely pleased with the BTC-6PXD. We went with these because they only use 6 aa batteries and they were smaller/lighter than the BTC-8A."
Given the similarity between the western Ecuador conditions you describe and those we face in Costa Rica, the Browning ($180 from Amazon, where its 37 reviews are predominantly favourable) sounds like the camera for us. Your 12.2.2021 report is now over a year old, however. Please, has anything changed since then? Any other candidates we should consider?
5 August 2022 at 02:00pm
Hi Shawn,
I am looking into camera traps to use for an arboreal project in Panama, and I am really interested in your experience of mounting camera traps up trees. The photo shows an interesting mount; did you make it yourselves?
How were the seals on the Brownings? I have been tempted to go for Reconyx because they have really good o-ring seals, but they may just be too pricey, so I am looking for a reliable alternative.
Anything you can share will be useful.
Cheers
Lucy
I made an open-source tool to help you sort camera trap images
8 February 2021 at 05:01pm
13 March 2021 at 02:58pm
Hi, this seems great and easy to use! Just a question: can the software identify the species, or does it "only" categorize animal/vehicle/human? And can we "train" the software to detect a specific species?
Thank you
13 March 2021 at 05:54pm
Right now the only classifications are animal/vehicle/person/empty. It cannot discern between different species.
There is no support for training at the moment -- I am envisioning something down the line but I wouldn't say that's coming any time soon.
Hope that helps!
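For readers who would rather script this kind of sorting themselves, here is a minimal sketch that copies images into animal/person/vehicle/empty folders based on a MegaDetector batch-output JSON. The field names and category codes follow MegaDetector's documented output format; the file paths and confidence threshold are placeholder assumptions:

```python
import json
import shutil
from pathlib import Path

# MegaDetector batch-output category codes.
CATEGORIES = {"1": "animal", "2": "person", "3": "vehicle"}
CONF_THRESHOLD = 0.8          # illustrative; tune for your data
RESULTS_FILE = "md_output.json"   # hypothetical path to MegaDetector output

results = json.loads(Path(RESULTS_FILE).read_text())
out_root = Path("sorted")

for entry in results["images"]:
    # Keep only detections above the confidence threshold.
    detections = [d for d in (entry.get("detections") or [])
                  if d["conf"] >= CONF_THRESHOLD]
    if detections:
        best = max(detections, key=lambda d: d["conf"])
        label = CATEGORIES.get(best["category"], "unknown")
    else:
        label = "empty"
    dest = out_root / label
    dest.mkdir(parents=True, exist_ok=True)
    # "file" paths in the JSON are relative to the original image folder.
    shutil.copy(entry["file"], dest)
```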
29 March 2021 at 09:02pm
Hi
I just tried it and it works great; I will include it in my workflow.
Thanks for your work!
Juan
Protecting Wildlife with Machine Learning
29 January 2021 at 12:00am
23 February 2021 at 03:43pm
Thanks for your interest in MegaDetector! You're right that it's not practical to run MegaDetector on edge devices; its architecture is chosen to prioritize accuracy over speed, so it likes a big GPU. Well, more accurately... one *can* run anything on anything, but you will pay such a price in hassle and time that it's almost certainly not worth it.
Moreover, if we made a "light" version of MegaDetector (or any heavyweight model), it would still be too heavy for some environments, and too light (i.e., insufficiently accurate) for others. And you would still be spending lots of the model's bandwidth on animals and ecosystems that may not be relevant for you.
So... a more common approach among MD users who want to run edge models has been to take some sample unlabeled data from your specific ecosystem, or similar data that's publicly available (there's a lot of camera trap data at http://lila.science, for example), run that data through MegaDetector, and use the resulting boxes to train a model that fits your specific compute budget, in a framework that's easy to use in your environment (sometimes TFLite, often YOLO). This is an inelegant but very effective form of model compression, and it has the benefit of only asking your small model to deal with images that are relevant to your project (as opposed to MegaDetector, which uses up model bandwidth dealing with all kinds of ecosystems you may never see).
Hope that helps... of course if someone wants to take on the task of building a *family* of compressed MegaDetectors to provide a more off-the-shelf approach, we'd like to see that happen too!
-Dan
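For anyone wanting to try the workflow Dan describes, a minimal sketch of the conversion step, turning MegaDetector batch-output boxes into YOLO-format training labels, might look like the following. The normalized [x_min, y_min, width, height] box format follows MegaDetector's documented output; the class mapping, paths, and threshold are illustrative assumptions:

```python
import json
from pathlib import Path

# Sketch: convert MegaDetector batch-output detections into YOLO txt labels
# ("class x_center y_center width height", all normalized), which can then
# be used to train a lightweight edge model on your own ecosystem's images.
CONF_THRESHOLD = 0.8                   # illustrative; tune for your data
CLASS_MAP = {"1": 0, "2": 1, "3": 2}   # animal, person, vehicle
RESULTS_FILE = "md_output.json"        # hypothetical path to MD output

results = json.loads(Path(RESULTS_FILE).read_text())
label_dir = Path("labels")
label_dir.mkdir(exist_ok=True)

for entry in results["images"]:
    lines = []
    for det in entry.get("detections") or []:
        if det["conf"] < CONF_THRESHOLD or det["category"] not in CLASS_MAP:
            continue
        # MegaDetector boxes are normalized [x_min, y_min, width, height];
        # YOLO wants normalized center coordinates plus width and height.
        x, y, w, h = det["bbox"]
        lines.append(f"{CLASS_MAP[det['category']]} "
                     f"{x + w / 2:.6f} {y + h / 2:.6f} {w:.6f} {h:.6f}")
    out = label_dir / (Path(entry["file"]).stem + ".txt")
    out.write_text("\n".join(lines))
```

The resulting labels/ directory pairs with the original images in the usual YOLO training layout, so the small model only ever sees boxes MegaDetector found in data from your own project.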