Camera traps have been a key part of the conservation toolkit for decades. Remotely triggered video or still cameras allow researchers and managers to monitor cryptic species, survey populations, and support enforcement responses by documenting illegal activities. Increasingly, machine learning is being implemented to automate the processing of data generated by camera traps.
A recently published study showed that, despite camera traps being well-established and widely used tools in conservation, progress in their development has plateaued since the emergence of the modern model in the mid-2000s, leaving users struggling with many of the same issues they faced a decade ago. That manufacturer ratings have not improved over time, despite technological advancements, demonstrates the need for a new generation of innovative conservation camera traps. Join this group to explore existing efforts, established needs, and what next-generation camera traps might look like - including the integration of AI for data processing through initiatives like Wildlife Insights and Wild Me.
Group Highlights:
Our past Tech Tutors seasons featured multiple episodes for experienced and new camera trappers alike. How Do I Repair My Camera Traps? brought together WILDLABS members Laure Joanny, Alistair Stewart, and Rob Appleby and shared many troubleshooting and DIY resources for common issues.
For camera trap users looking to incorporate machine learning into the data analysis process, Sara Beery's How do I get started using machine learning for my camera traps? is an incredible resource discussing the user-friendly tool MegaDetector.
And for those who are new to camera trapping, Marcella Kelly's How do I choose the right camera trap(s) based on interests, goals, and species? will help you make important decisions based on factors like species, environment, power, durability, and more.
Finally, for an in-depth conversation on camera trap hardware and software, check out the Camera Traps Virtual Meetup featuring Sara Beery, Roland Kays, and Sam Seccombe.
And while you're here, be sure to stop by the camera trap community's collaborative troubleshooting data bank, where we're compiling common problems with the goal of creating a consistent place to exchange tips and tricks!
Header photo: ACEAA-Conservacion Amazonica
Smart Parks
Founder of Smart Parks - www.smartparks.org / Founder of OpenCollar - https://opencollar.io
- 0 Resources
- 13 Discussions
- 5 Groups
- 0 Resources
- 7 Discussions
- 18 Groups
Royal Society for the Protection of Birds (RSPB)
- 1 Resources
- 18 Discussions
- 4 Groups
- @LucyD
- | She/Her
Software developer and wildlife ecologist
- 0 Resources
- 2 Discussions
- 5 Groups
- @cleo
- | She/Her
Southern African Wildlife College
I am an ecologist working in African conservation areas who loves wildlife & wild landscapes. I increasingly recognize that conservation is about people, especially those living in & around protected areas. Finding ways to benefit marginalized people is my passion.
- 0 Resources
- 0 Discussions
- 14 Groups
Terrestrial Ecologist
- 0 Resources
- 1 Discussions
- 4 Groups
WILDLABS & Fauna & Flora
Project Officer, WILDLABS
- 12 Resources
- 13 Discussions
- 6 Groups
Addax Data Science
Wildlife ecologist with a special interest in machine learning
- 0 Resources
- 8 Discussions
- 2 Groups
- @eliminatha
- | she
Passionate wildlife researcher dedicated to uncovering the secrets of the natural world through the lens of camera traps, with a sharp eye for detail and a strong commitment to wildlife conservation.
- 0 Resources
- 0 Discussions
- 4 Groups
I help conservation scientists spend less time on boring stuff.
- 0 Resources
- 13 Discussions
- 6 Groups
- @jcbotsch
- | he/him
I'm a population and community ecologist studying the effects of global change on insect populations.
- 0 Resources
- 0 Discussions
- 6 Groups
A wildlife ranger with over 5 years on active duty and 3 years as an active EarthRanger user down in Murchison Falls National Park, Uganda.
- 0 Resources
- 0 Discussions
- 1 Groups
Article
Read our interview with Clementine Uwamahoro, African Parks’ Country Manager in Conservation Technology, who oversees technology operations for both Akagera National Park and Nyungwe National Park.
29 November 2023
TagRanger® is a state-of-the-art wildlife finding, monitoring and tracking solution for research, conservation and environmental professionals. With superior configurability for logging data, reporting location and...
23 November 2023
Lisanne Petracca is hiring TWO technicians within the SPEC Lab Ocelot Research Program to start January 17, 2024.
16 November 2023
With the rising threats to biodiversity such as wildlife crime, climate change and human-wildlife conflict today, wildlife monitoring technologies have become vital to study movement ecology, behaviour patterns, changes...
25 October 2023
WCS is seeking a Conservation Technology Specialist to join their work in the Okapi Wildlife Reserve.
11 August 2023
Please join us in celebrating this year’s top #Tech4Wildlife Photo Challenge Honorees as chosen by our panel of leading conservation organization judges, and enjoy the story contained within these entries about how our...
4 August 2023
Join us as we count down the WILDLABS community's honorees in the first-ever #Tech4Wildlife Community Choice Awards!
3 August 2023
Exciting opportunity for an experienced biodiversity monitoring expert in ZSL's conservation department
18 July 2023
Article
In 2019, the U.S. Navy initiated a time-lapse camera study to investigate seal presence at select haul-out locations in the lower Chesapeake Bay and coastal waters of Virginia, which are important areas to Navy training...
13 July 2023
Apply for funding (£500,000-£750,000) to develop software systems, which will help to improve biodiversity monitoring by automating the analysis of images and videos
12 July 2023
Applications are now open until 23 July for the 2023 Canon Oceania Grants, with the Environmental Grant category valued at AU$5,000. The finalist will be selected based on the environmental and social merits of...
7 July 2023
Article
At Appsilon, we are always working to enable our users to get the most out of our solutions. With this in mind, we are happy to introduce two new add-ons to Mbaza AI.
4 July 2023
December 2023
event
November 2023
event
| Description | Activity | Replies | Groups | Updated |
|---|---|---|---|---|
| It's certainly possible that your camera(s) just aren't embedding structured metadata for this information. But I wouldn't give up too quickly on looking for it, because... | | | Camera Traps, Data management and processing tools, Open Source Solutions, Software and Mobile Apps | 1 hour 59 minutes ago |
| Zamba Cloud can handle video! | | | Camera Traps | 1 day 3 hours ago |
| Literally HOW did I miss that! Thanks for flagging! | | | AI for Conservation, Camera Traps | 1 day 15 hours ago |
| Thank you for the links, Robin. | | +7 | Camera Traps | 1 week 2 days ago |
| The two cameras you mention below tick off most of the items in your requirements list. I think the exception is the “timed start” whereby the camera would “wake up” to arm... | | | Camera Traps | 1 week 3 days ago |
| Hi Ben, I would be interested to see if the Instant Detect 2.0 camera system might be useful for this. The cameras can transmit thumbnails of the captured images using LoRa radio to... | | +4 | Camera Traps | 1 week 5 days ago |
| Hello Sam, what would you estimate the cost was for the first version, Instant Detect 1.0? That might help my research. | | | AI for Conservation, Camera Traps, Human-Wildlife Conflict, Sensors | 1 week 6 days ago |
| Hi @GermanFore, I work with the BearID Project on individual identification of brown bears from faces. More recently we worked on face detection across all bear species and ran... | | | AI for Conservation, Camera Traps, Data management and processing tools, Software and Mobile Apps | 3 weeks 4 days ago |
| Hi Jay! Thanks for posting this here as well as your great presentation in the Variety Hour the other day! Cheers! | | | Camera Traps | 1 month ago |
| For anyone interested: the GBIF guide Best Practices for Managing and Publishing Camera Trap Data is still open for review and feedback until next week. More info can be found in... | | | Autonomous Camera Traps for Insects, Camera Traps | 1 month ago |
| Hi Maddie, this camera has a very quick reaction time. | | | Camera Traps | 1 month 1 week ago |
| We have the FLIR FC series (FC 618) thermal cameras set up; the connection between the cameras and monitoring station is through fiber cable and microwave... | | | Human-Wildlife Conflict, Camera Traps, Sensors | 1 month 3 weeks ago |
Paving the Way for Women: LoRaWAN Technology in Akagera National Park with Clementine Uwamahoro

29 November 2023 5:22pm
Automatic extraction of temperature/moon phase from camera trap video
29 November 2023 1:15pm
29 November 2023 4:28pm
It's certainly possible that your camera(s) just aren't embedding structured metadata for this information. But I wouldn't give up too quickly on looking for it, because extracting it from actual video frames is more challenging, less efficient, and less reliable.
EXIF / IPTC metadata is effectively limited to a finite set of pre-defined attributes, mostly centred around camera settings & author information, respectively. You can check the latest specification to see precisely what is formally defined. Ambient temperature actually is in there (and has been for a while) but there's nothing for astronomical data (although, can you not infer the moon phase based on the location & time data?).
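On that moon-phase aside: a rough phase can indeed be derived from the timestamp alone (location only refines it slightly). A minimal Python sketch, using a mean synodic month and a known new-moon epoch - good enough for binning captures into phase categories, not for precision astronomy:

```python
from datetime import datetime, timezone

SYNODIC_MONTH = 29.530588853  # mean length of a lunation, in days
NEW_MOON_EPOCH = datetime(2000, 1, 6, 18, 14, tzinfo=timezone.utc)  # a known new moon

def moon_phase_fraction(ts: datetime) -> float:
    """Return the phase as a fraction of the synodic cycle.

    0.0 = new moon, 0.5 = full moon, approaching 1.0 = back to new.
    """
    days = (ts - NEW_MOON_EPOCH).total_seconds() / 86400.0
    return (days / SYNODIC_MONTH) % 1.0

def moon_phase_name(ts: datetime) -> str:
    """Bin the fraction into the eight conventional phase names."""
    names = ["new moon", "waxing crescent", "first quarter", "waxing gibbous",
             "full moon", "waning gibbous", "last quarter", "waning crescent"]
    # Shift by half a bin so each name is centred on its nominal fraction.
    return names[int(moon_phase_fraction(ts) * 8 + 0.5) % 8]
```

You could run this over the capture timestamps and write the result into your own database alongside whatever metadata you do manage to extract.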
It is alas a long-running issue that camera makers often misunderstand the specification and store data incorrectly (or not at all).
Camera makers can embed proprietary data in MakerNote fields, but the format is unspecified and standard tools (like exiftool) likely won't understand them for all but the most popular cameras (think Nikon, Canon, Sony, etc). And possibly only for older models.
Many common image & video containers (e.g. JPEG and MP4) do support embedded XMP, an extensible format which allows model- and make-specific metadata. It's possible that's in use in your case. exiftool does not show embedded XMP data by default - at least not reliably. In a nutshell, I suggest running exiftool as:
exiftool -D -g -G -ee -scanForXMP -u -U <PATH>
That's more likely to show you embedded XMP data, as well as all EXIF / IPTC tags (even those unrecognised by exiftool - you can at least get their raw bytes and work from there). You can also add the `-v3` argument if you want to dig even harder, or see the raw data in more detail.
Another way some cameras store proprietary metadata is in proprietary tracks (e.g. GPMF for GoPros). You can see if your files contain any such tracks via:
ffmpeg -i <PATH>
Usually these are used for metadata which can change during a recording, as EXIF / IPTC / XMP have no real support for that. But some camera makers use them even for static metadata out of e.g. laziness or hubris.
Extracting such tracks is easy enough with tools like ffmpeg, but interpreting them can be challenging unless their format is documented, as GoPro's GPMF is.
Can you provide a sample file for us to examine?
Video camera trap analysis help
21 November 2023 7:49am
23 November 2023 11:26am
Hey there :)
You could use Timelapse (for data annotation and automatic metadata extraction, e.g. date/time, temperature, etc) in combination with Megadetector (it works with videos too). You basically run the Megadetector through your batch of videos and then use the output json file in Timelapse (to filter out empty videos or automate the annotation process).
Here's the documentation for Timelapse:
https://saul.cpsc.ucalgary.ca/timelapse/pmwiki.php?n=Main.HomePage
And here you can find detailed info on how to integrate the Megadetector input into Timelapse.
https://saul.cpsc.ucalgary.ca/timelapse/uploads/Guides/TimelapseImageRecognitionGuide.pdf
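If you'd rather script the filtering step instead of (or before) loading results into Timelapse, the Megadetector batch output is plain JSON and easy to post-process. A minimal sketch, assuming the usual batch-output layout (an `images` list with per-file `detections`, each carrying a `conf` score); the file path is illustrative:

```python
import json

def non_empty_files(results_path: str, conf_threshold: float = 0.2) -> list[str]:
    """Return the files with at least one detection above the confidence threshold."""
    with open(results_path) as f:
        results = json.load(f)
    keep = []
    for image in results.get("images", []):
        detections = image.get("detections") or []  # may be null for failed files
        if any(d["conf"] >= conf_threshold for d in detections):
            keep.append(image["file"])
    return keep
```

You could then copy or symlink just those files into a "review" folder and annotate only the non-empty videos.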
I hope this is useful.
Cheers,
Lucy
23 November 2023 4:55pm
You are correct that at its core, MegaDetector only knows about still images. However, we have found that it's quite reasonable to run MegaDetector on something like every 10th frame of a video (that's still typically three frames per second) to determine whether animals are present. I would say video represents around 10% of total MegaDetector use right now. If you have very small animals that flash in and out of frame quickly, this probably isn't a good approach (this comes up a lot with video surveys for birds), but for most animals, this has worked great.
As Lucy highlights, most MegaDetector users load their MegaDetector results into Timelapse to review their images; Timelapse lets you say, for example, "show me only the images that MegaDetector thinks are not empty". It can do the same for videos. But it's not necessary to use Timelapse; some users, for example, review their videos in dedicated behavioral coding software and just want to get blanks out of the way first. MegaDetector can help with that.
All that said, it's a little bit more involved to process videos than to process stills, so we recommend you reach out to us at [email protected], and we can run a few test videos, make sure things are working OK, and then walk you through the process. For the adventurous, the relevant script is here, and the notebook we use for batch jobs is here, but for new users we typically recommend starting by email.
And just for fun, here is a video of MegaDetector running on a video of a raccoon. Because raccoons.
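For readers curious what "every 10th frame" amounts to in code, here is a sketch of just the sampling logic (frame decoding would come from OpenCV, ffmpeg, or similar, and the per-frame confidences from whatever detector you run - both are placeholders here):

```python
def frames_to_sample(total_frames: int, fps: float, target_fps: float = 3.0) -> list[int]:
    """Pick frame indices so that roughly `target_fps` frames per second are analysed.

    E.g. a 30 fps video sampled at a target of 3 fps means every 10th frame.
    """
    stride = max(1, round(fps / target_fps))
    return list(range(0, total_frames, stride))

def video_contains_animal(confidences: list[float], threshold: float = 0.2) -> bool:
    """Treat a video as non-empty if any sampled frame scores above the threshold."""
    return any(c >= threshold for c in confidences)
```

As noted above, this trades a little sensitivity (animals that flash through frame between samples) for a roughly 10x reduction in detector compute.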
28 November 2023 2:33pm
Zamba Cloud can handle video!
Is anyone or platform supporting ML for camera trap video processing (id-ing jaguar)?
27 November 2023 10:49am
27 November 2023 11:42am
Hi!
There was a very related and very recent thread on video from camera traps here:

Video camera trap analysis help | WILDLABS
Hello, I'm a complete newbie so any help would be appreciated. I never trained any ML models (I have junior/mid experience with Python and R) nor annotated data, but would like to learn. We have set up a couple of camera traps that record videos on vulture feeding sites, and I am tasked to analyze the video data for their presence. I thought that using Megadetector could work, but it seems to me it only takes in images, so I don't know where to start. What would you use? Is there a pipeline (or articles/repositories/videos) about how best to approach the task? Thanks in advance.
It is always possible to split videos into frames using ffmpeg (or https://ffmpeg-batch.sourceforge.io) or similar and then feed them to the ML workflow. I am pretty sure ffmpeg can do something clever with metadata and filenames so the metadata follows the files.
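As a concrete sketch of that ffmpeg route: the helper below builds the extraction command rather than running it, so you can inspect it first. The output pattern and paths are illustrative assumptions; `-vf fps=3` keeps roughly three frames per second rather than every frame, and baking the source filename into the frame names keeps the frame-to-video link without relying on embedded metadata:

```python
import shlex

def ffmpeg_extract_cmd(video_path: str, out_dir: str, fps: float = 3.0) -> str:
    """Build an ffmpeg command that dumps sampled frames as JPEGs.

    Frames are named like clip_000001.jpg so each one can always be
    traced back to its source video.
    """
    stem = video_path.rsplit("/", 1)[-1].rsplit(".", 1)[0]
    out_pattern = f"{out_dir}/{stem}_%06d.jpg"
    args = ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}", out_pattern]
    return " ".join(shlex.quote(a) for a in args)
```

You would then loop this over your video folder and pass the resulting frame directory to Megadetector or any other image-based workflow.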
Cheers,
27 November 2023 1:58pm
Zamba Cloud supports video!
28 November 2023 3:26am
Literally HOW did I miss that! Thanks for flagging!
ICOTEQ launch TAGRANGER® system of products
23 November 2023 1:25pm
AWMS Conference 2023
Eliminatha, WiCT 2023 Tanzania
21 November 2023 1:09pm
Passionate wildlife researcher and tech user making strides in Grumeti, in the heart of the western Serengeti, Tanzania, using camera traps to gain priceless insights into the lives of its unique fauna and contributing greatly to understanding and preserving the Serengeti's ecosystems.
setting up a network of cameras connected to a server via WIFI
3 April 2022 7:19am
9 July 2022 6:03am
Great discussion! Pet (and other 'home') cams are an interesting option as @antonab mentioned. I've been testing one at home that physically tracks moving objects (and does a pretty good job of it), connects to my home network and can be live previewed, all for AUD69 (I bought it on special. Normal retail is AUD80):
On the Wifi front, and a bit of a tangent, has anyone done any work using 'HaLow' (see below for example) as it seems like an interesting way to extend Wifi networks?
17 July 2022 8:21am
Cool thread!
I will be testing Reolink Wi-Fi cameras in combination with solar powered TP-Link long range Wi-Fi antennas/repeaters later this field season for monitoring arctic fox dens at our remote off grid site in Greenland. The long range Wi-Fi antennas are rather power hungry but with sufficient solar panel and battery capacity I am hopeful it will work.
I am looking forward to explore the links and hints above after the field season.
Cheers,
20 November 2023 9:39am
Thank you for the links, Robin.
Alternative to Reconyx Ultrafire
8 November 2023 9:07am
18 November 2023 8:22pm
The two cameras you mention below tick off most of the items in your requirements list. I think the exception is the “timed start” whereby the camera would “wake up” to arm itself after a certain date. Camlockbox.com provides security boxes for both.
Especially if a white flash is useful in your research, you may also want to consider the GardePro T5WF. I don’t have a lot of long-term experience with this camera, but it is one of the few that offers a white flash, and it has excellent battery life, especially for night captures. The audio can be a little flaky.
I have done posts on these cameras, including a teardown. See:
https://winterberrywildlife.ouroneacrefarm.com/2022/04/10/browning-spec-ops-elite-hp5-teardown/
https://winterberrywildlife.ouroneacrefarm.com/2022/09/26/inside-the-bushnell-core-ds-4k-trail-camera/
https://winterberrywildlife.ouroneacrefarm.com/2023/11/18/gardepro-t5wf-white-flash-trail-camera/
I have heard reports that the HP5 can let in moisture in very wet environments. This may be a direct water contact type of thing, as we have never had water issues with this camera when it is installed in a lock box (US Northeast, Northwest).
We prefer the HP5 due to superior image and audio quality. That said, there is a known issue that with some HP5 cameras, with some fast (> 80 MB/s rated read) and large SD cards, the SD card can become corrupted, preventing the camera from capturing images. I address this, including a fix via firmware, in another post:
https://winterberrywildlife.ouroneacrefarm.com/2023/11/16/fixing-browning-edge-elite-hp4-and-hp5-sd-card-corruption/
Hope this helps.
-bob
Ideas for easy/fast maintenance of arboreal camera traps
30 August 2023 8:51pm
15 November 2023 9:04pm
I use the same wifi trick with Reolink solar cameras looking at tree cavities (Austrian mountain forest). You can even put the mobile router on drone to get connection to the cameras.
16 November 2023 10:29pm
Yup Reolink is awesome! I've had a few cameras die on me, but hopefully I can get them repaired.
17 November 2023 9:19am
Hi Ben,
I would be interested to see if the Instant Detect 2.0 camera system might be useful for this.
The cameras can transmit thumbnails of the captured images using LoRa radio to a Base Station. You could then see all the captured images at this Base Station, as well as the camera's battery and memory information (device health). In addition, you could also change camera settings from the Base Station so you would not need to reclimb the trees to change from PIR sensitivity high to medium for instance.
The Instant Detect 2.0 cameras also have an external power port so a cable could be run to the ground to a DC 12V battery for long term power.
If you wanted to, you could also connect the Base Station to the Cloud using satellite connectivity, so that you can monitor the whole system remotely, but this will incur more cost and power usage of the Base Station.
I'd be keen to hear your thoughts,
Thanks,
Sam
Instant Detect 2.0 and related cost
16 November 2023 12:50am
16 November 2023 10:55am
Hi Kaarthika, hi all,
ZSL's Instant Detect 2.0 is currently undergoing Beta testing with external partners and so is still pre-production. We therefore do not have final pricing for the system.
Saying this, we have got a manufacturing partner fully set-up who has already completed two full build rounds of the system, one in 2020 and another in 2023. This means we actually have a very good idea of the system's build costs and what these are likely to be when we can manufacture the system in volume.
While I cannot release this pricing yet, I am confident that we will have an unparalleled proposition.
In particular, the satellite airtime package we can supply to conservationists due to the generosity of the Iridium Satellite Company means that each system can send 3,600 (25-50KB) images a month from anywhere in the world for a single fixed fee. This equates to around a 97% discount to the normal commercial rates.
We are currently very busy fundraising so that we can make this final step to scale the system.
If we can secure this funding, we hope to go into volume production by mid-2024.
Best wishes,
Sam
16 November 2023 2:12pm
Thank You for your valuable update Sam
16 November 2023 2:30pm
Hello Sam,
What would you estimate the cost was for the first version, Instant Detect 1.0? That might help my research.
Research/Field Technician - SPEC Lab Ocelot Research Program
16 November 2023 1:58pm
Insight; a secure online platform designed for sharing experiences of conservation tool use.
7 November 2023 1:01pm
A secure platform designed for those working to monitor & protect natural resources. Insight facilitates sharing experience, knowledge & tools to increase efficiency & effectiveness in conservation. By sharing we reduce time & money spent to find, test, & implement solutions.
DeepFaune: a software for AI-based identification of mammals in camera-trap pictures and videos
14 July 2023 3:14pm
24 October 2023 8:46pm
Hello to all, new to this group. This is very exciting technology. Can it work for ID of individual animals? We are interested in AI for identifying individual jaguars (spots) and Andean bears (face characteristics). Any recommendations? Contacts? Thanks!
German
25 October 2023 8:57am
That's a very interesting question and use case (I'm not from DeepFaune). I'm playing with this at the moment and intend to integrate it into my other security software, which can capture and send video alerts. I should have this working within a few weeks, I think.
The structure of that software is two-stage: the first stage identifies that there is an animal and its bounding box, and then there's a classification stage. I intend to merge the two stages so that it behaves like a YOLO model, outputting bounding boxes as well as the type of animal.
However, my security software can cascade models. So if you were able to train a single-stage classifier that identifies your particular bears, then you could cascade all of these models in my software to generate an alert with a video saying which bear it was.
4 November 2023 4:51am
Hi @GermanFore ,
I work with the BearID Project on individual identification of brown bears from faces. More recently we worked on face detection across all bear species and ran some tests with identifying Andean bears. You can find details in the paper I linked below. We plan to do more work with Andean bears in 2024.
I would love to connect with you. I'll send you a message with my email address.
Regards,
Ed

Multispecies facial detection for individual identification of wildlife: a case study across ursids
Mammalian Biology - To address biodiversity decline in the era of big data, replicable methods of data processing are needed. Automated methods of individual identification (ID) via computer vision...

Nepal's tiger conservation gets tech boost with AI-powered deer tracking
30 October 2023 1:23pm
Researchers in Nepal are using vertical cameras and AI technology to track and profile individual spotted deer (Axis axis), similar to the methods used for tigers.
Modifying GoPro cameras to be IR sensitive.
25 October 2023 6:38pm
27 October 2023 6:35am
Hi Jay!
Thanks for posting this here as well as your great presentation in the Variety Hour the other day!
Cheers!
5 Trailblazing Wildlife Monitoring Tech Solutions across East Africa. What Monitoring Technologies are you using?
25 October 2023 12:40pm
Metadata standards for Automated Insect Camera Traps
24 November 2022 9:49am
2 December 2022 3:58pm
Yes. I think this is really the way to go!
6 July 2023 4:48am
Here is another metadata initiative to be aware of. OGC has been developing a standard for describing training datasets for AI/ML image recognition and labeling. The review phase is over and it will become a new standard in the next few weeks. We should consider its adoption when we develop our own training image collections.

OGC Seeks Public Comment on New Standard for Training Data for AI/ML Applications - Open Geospatial Consortium
OGC TrainingDML-AI Standard defines the model and encodings for standardizing any training data used to train, validate, and test Machine Learning models that involve location or time.

24 October 2023 9:12am
For anyone interested: the GBIF guide Best Practices for Managing and Publishing Camera Trap Data is still open for review and feedback until next week. More info can be found in their news post.
Best,
Max
Trail cam recommendations for capturing small, quick mammals at night?
16 October 2023 12:01am
20 October 2023 3:24pm
Hi @MaddievdW have you considered using a 'tunnel' to help, well, funnel, small critters into a space with a camera that makes it a bit easier for detections? A few years back, we made a PVC pipe tunnel with a protected food lure that seemed to work well with even cheapy trail cameras (and in fact, we ended up having to block some of the IR illumination using a few layers of athletic tape over the LED array). Here's a rather blurry image of a bandicoot we got:
And another of an antechinus:

We weren't that concerned with image quality, as we were actually testing an RFID logger in the tunnel also, and simply wanted to know if we got critters with no tags and what species (if possible). So one thing I'd definitely consider if you go down this path is focal length of the camera. Here's a previous discussion on a similar idea:

Comparisons: Close-up Lenses for Camera Traps | WILDLABS
Hey Guys! We're setting up a camera trap within a woodpile to see what mammals enter/exit the nesting box we've included there. For this purpose, we'd need a camera trap with a close-focus lens as well as the ability to download the data from the camera via USB cable (since the camera won't be accessible once it's installed in the wood pile). The Bushnell NatureView (119740) does have a close-up lens included but does not support downloading data via USB. The Bushnell TrophyCam (119874) does support downloading data via USB but does not have a close-up lens included. Does anybody have a solution that solves both problems? What would help a lot would be an attachable close-up lens that can be bought separately, but I haven't found any yet. Thanks in advance! Nils. PS: Using wireless transfer (WLAN / GSM) of the images is not an option.
I guess you could always 'calibrate' it to a certain extent by just using a longer section of pipe. Here's what our setup looked like - but again, we had the RFID logger, so a camera-only version would be a bit simpler. The camera and battery were in the screw-top section on the left-hand side of the tunnel image, and each entrance had RFID antennas linked back to a logger in the same compartment as the camera:

All the best for your research,
Rob
20 October 2023 5:06pm
Addressing each of the questions/issues posed:
Triggering Camera:
If you are getting triggers, but empty frames, during known visits by these lickety-split animals, the issue is the trigger speed. Looking at the Browning selection guide, for example, https://browningtrailcameras.zendesk.com/hc/en-us/article_attachments/12697703673243
I see that the Elite-HP5 models have a 100 ms advertised minimum trigger speed, slightly faster (by 50 ms) than the Dark Ops Pro DCL. At 60 FPS video, 50 ms is roughly three extra frames, which could be significant with fast-moving targets.
I have found, counterintuitively, that Browning SpecOps and ReconForce models (Elite-HP5) get to the first frame sooner when taking videos than when taking stills. I don’t understand this completely, but it’s a thing.
If you are not getting any triggers, then the PIR sensor is somehow missing the target. Make sure you understand the “detection zones” supported by your camera. These are not published, but can be determined with some patience and readily available “equipment” – see my post on “Trail Camera Detection Zones” at https://winterberrywildlife.ouroneacrefarm.com/2022/08/01/deep-tech-trail-camera-detection-zones/
Putting more than one camera at a site may also increase the probability that at least one triggers (and may improve lighting, see below)
If you’re consistently missing triggers, you may have to consider a non-PIR sensor. Unfortunately, this takes you out of the domain of commercial trail cameras. Cognisys makes a number of “active” sensors based on “break beam” and (now) lidar for use with DSLR-based camera traps. You would also have to come up with your own no-glow lighting source, and hack the DSLR camera to remove the IR filter (built onto the sensor). In our experience, these sets are 10x more expensive and time-consuming vs. commercial trail camera sets, and are only justified by the potential for (a few) superior images.
The species-specific triggers and sets mentioned on this thread seem like a better option.
Avoiding daytime false triggers: All the commercial trail cameras I’m aware of have a single type of trigger sensor, based on a Passive InfraRed (PIR) sensor and a Fresnel lens. Apps and McNutt cover this admirably in Peter Apps and John Weldon McNutt, “How camera traps work and how to work them,” African Journal of Ecology, 2018.
These sensors trigger on changes in certain areas of the thermal field – in practice a combination of a heat and motion in one or more detection zones. They are not decomposable.
Some cameras (e.g. Browning SpecOps, and maybe the Dark Ops Pro?) allow you to set hours of operation so that the camera only triggers at night, for example. This would cause you to (for sure) miss “off hours” appearances by your target species, but would avoid daytime false triggers.
No-Glow Image Quality: The good news about longer-wavelength “no-glow” flashes is that animals are less sensitive to them. The bad news is that the CMOS image sensors used by cameras are also less sensitive to the longer IR. Less signal leads to lower-quality images. Others have mentioned adding supplemental no-glow illumination. An easy way to do this would be to set up two cameras at each of your sites: when both are triggered, each will “see” twice as much illumination, and image quality will be improved. Browning SpecOps models (at least) have dynamic exposure control on video, which allows this scheme to work with only a frame or two of washout while the algorithm adjusts exposure. For an example of this effect, see the opening porcupine sequence in our video at https://www.youtube.com/watch?v=itx7KnlxKS4
21 October 2023 12:52am
Hi Maddie,
This camera has a very quick reaction time.

Reconyx HF2X HyperFire 2 Covert IR Camera
The HF2X is a state-of-the-art digital camera with passive infrared (PIR) motion detection and a high-output covert infrared nighttime illuminator—all within a secure, weather-resistant case. One of the best-performing, most reliable trail cameras on the market, it is backed by a 5-year manufacturer warranty.

Shedding light on nocturnal behavior: A cost-effective solution for remote, infrared video recording in the field
20 October 2023 12:31pm
Do you or someone you love study nocturnal animals? Do you want to film behavior in infrared, but NOT spend $1k on a camera (that will die in the field anyways)? Is short battery life a constant battle? Here, this preprint will help!
To find out more, come to Variety Hour!
Correspondence among multiple methods provides confidence when measuring marine protected area effects for species and assemblages
20 October 2023 12:28pm
When considering MPA groundfish monitoring methods: the more, the merrier! A new paper in the Journal of Applied Ecology compared three monitoring techniques commonly used to survey groundfish populations in MPAs, and found that using multiple methods was the best approach.
Catch up with The Variety Hour: October 2023
19 October 2023 11:59am
Thermal cameras for monitoring visitors in highly vulnerable conservation areas
21 June 2022 3:44pm
20 June 2023 12:57pm
Hi,
I have been involved in people-detection tech for more than 10 years and have an open source project that uses CNN computer vision for object detection and alerting, including people:
GitHub - hcfman/sbts-install: Installs StalkedByTheState over the sbts-base system to build a home and business security appliance on NVIDIA Jetson series computers.

I also have several thermal cameras here. Note that people detection also works fine on thermal images. Mostly, though, I've stopped using thermal cameras, because normal cameras work so well with modern models that thermal no longer has an advantage per se. If you really need to detect people in pitch darkness, a thermal camera combined with image detection would be better than thermal movement detection alone.
Let me know if I can help in any way. I know this is quite an old post though.
22 September 2023 12:00pm
I would be interested in how you set up this system. Which model do you use, and how are they connected?
6 October 2023 12:27pm
We have the FLIR FC series (FC 618) thermal cameras set up. The connection between the cameras and the monitoring station runs over fiber cable and microwave radio links.
GEO BON Monitoring Biodiversity for Action
5 October 2023 3:10pm
Good Thermal/ Night Vision Cameras?
1 September 2023 7:15pm
14 September 2023 1:31pm
@LucyHReaserRe At this area in the past, we have tried using a normal IR trail camera, but with very limited sensitivity. I have thought about adding IR fog lights out there to help, but was leaning towards thermal cameras to allow more types of data to be extracted from the images in the future, e.g. age class based on heat signatures.
Thank you all for providing input, I will look into each of these ideas!
22 September 2023 11:13am
I'm jumping into the discussion with a similar objective. I'm looking for a thermal camera trap (I know about Cacophony). It would be used to improve invasive species monitoring, especially for rats and feral cats.
Any idea?
Thanks
22 September 2023 12:56pm
Hi @mguins , as @kimhendrikse mentioned, resolution (and also brand) can dictate a big jump in price for thermal cameras. GroupGets has a budget Lepton (FS, short for 'factory second' I think) if you wanted to check one out:

Teledyne FLIR LEPTON® FS
Lepton FS is a non-radiometric 160 x 120 resolution micro thermal camera module with reduced thermal sensitivity, reduced scene dynamic range, and up to 3% inoperable pixels, but the lowest price ever for a factory Lepton. These units balance performance and price, enabling monitoring applications where radiometry is not required and pixel-level image information is less important than broad thermal data. Key specifications: uncooled VOx microbolometer, longwave infrared (8 to 14 μm), 160 x 120 progressive-scan array, 12 μm pixels, 8.7 Hz effective frame rate, thermal sensitivity <75 mK, low operating power (roughly 150 mW typical, 650 mW during a shutter event), and an 11.5 x 12.7 x 6.8 mm package. Note that Lepton FS units are not guaranteed for radiometric accuracy, and FLIR Lepton modules ship only to North America, Australia, Switzerland, the United Kingdom, and EU countries.

They also have a bunch of other FLIR products and boards for interfacing with Leptons etc., so the shop is worth a browse. It could also be worth taking a look at Seek modules, some of which @Alasdair has experience with.

They also have modules you can connect to a mobile phone:

@TopBloke I'd be very keen to see your Lepton camera trap too!
Cheers,
Rob
Turn an old smartphone into an AI camera trap?
24 September 2021 1:08pm
11 July 2023 10:56am
Any news regarding this topic?
14 July 2023 4:05pm
Despite the power challenges noted in this thread, I think the "used" stream of smartphones is a viable platform for trail cameras. Having successfully hacked custom features into closed-source trail camera firmware (https://github.com/robertzak133/unified-btc-reverse), I am also hoping that software development on smartphones is a better way to do feature innovation on trail cameras.
I have just "started" on a trail cam app for the iPhone 12 Pro (not that "old" yet, but it will be by the time I'm done, and it's the first iPhone with LIDAR). I have done some toy apps on the iPhone before, but am mostly blissfully unaware of how much work this will be :) Nonetheless, my goal is to have a prototype working in the back yard by July 15, 2024. I'll post a project link on this thread as soon as it's up.
I’m just working on requirements now. My primary focus is on improving image quality, and capture efficiency vs. existing trail cameras for wildlife photography. For example:
- Using the camera's image-quality libraries to improve low-light captures, exposure, etc.
- Improving trigger versatility and accuracy using LIDAR sensor
- Tracking auto-focus based on LIDAR
- Negative trigger delay for daylight shots
- Support for custom lighting via “ensemble” sets
It seems wrong not to leverage cellular connectivity, though this is a lower priority for me because most of our sets are beyond cell phone coverage.
I did find an interesting app, "Motion Detector Camera" by Phil Bailey (https://apps.apple.com/us/app/motion-detector-cam/id461753935). The free version uses a parameterizable motion-detection algorithm to trigger still images. It's pretty slick. I'm starting with LIDAR because I want a trigger that works in the dark.
Note that none of these require any AI processing of the images, though I have no doubt a smart phone would be a great place to do that one way or another. Do you have specific usage models/requirements in mind for in-phone image processing/classification?
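For anyone curious how a parameterizable motion trigger like the one in that app might work, here is a minimal frame-differencing sketch. This is an illustration under simple assumptions, not the app's actual algorithm: frames are modeled as flat lists of grayscale pixel values (0-255), and the trigger threshold is a made-up parameter you would tune in the field.

```python
def mean_abs_diff(prev, curr):
    """Average absolute per-pixel difference between two same-size frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def motion_triggered(prev, curr, pixel_delta=10.0):
    """Trigger a capture when the average pixel change exceeds pixel_delta.

    pixel_delta is the 'parameterizable' knob: raise it to ignore sensor
    noise and wind-blown vegetation, lower it to catch small subjects.
    """
    return mean_abs_diff(prev, curr) > pixel_delta

# A static scene barely changes; an animal entering shifts many pixels.
static = [50] * 100                   # reference frame
noisy = [52] * 100                    # sensor noise only -> no trigger
animal = [50] * 60 + [120] * 40       # bright subject covers 40% of frame

print(motion_triggered(static, noisy))   # False
print(motion_triggered(static, animal))  # True
```

A real implementation would also debounce (require motion across several consecutive frames) to cut false triggers, which is roughly what trail cameras' sensitivity settings control.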
22 September 2023 11:30am
Keep us posted, it looks promising!
Q&A: UK NERC £3.6m AI (image) for Biodiversity Funding Call - ask your questions here
13 September 2023 4:10pm
21 September 2023 4:27pm
This is super cool! @Hubertszcz, @briannajohns, several others, and I are all working towards some big biodiversity monitoring projects for a large conservation project here in Panama. The conservation project is already underway, but Hubert starts the on-the-ground work in January, and I'm working on a V3 of our open-source automated insect monitoring box to have ready for him by then.
I guess my main question would be whether this funding call is appropriate for (or interested in) this type of project, and what types of assistance are possible through this type of funding (researchers? design time? materials? laboratory or field construction?).
Camera Trap Data Management Survey: Results
20 September 2023 1:46am
Camera traps statistics
18 September 2023 10:22am
GBIF guide to managing and publishing camera trap data opens for community review
15 September 2023 2:23pm
29 November 2023 3:06pm
Hello Lyuboslava,
As you said, the first thing that came to mind was running a computer vision program to read the needed data. If the data does not change over the duration of the footage then, in my humble opinion, it would be easiest to take a single frame from each clip and run an optical character recognition (OCR) program on it to get the temperature.
The moon phase might pose more of a challenge if the metadata truly is hidden. A separate CNN trained on images of the moon phases should suffice, but as you well know, that is not the simplest solution.
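To make the single-frame OCR idea concrete, here is a minimal sketch. It assumes opencv-python and pytesseract are installed for the frame-grab step, and the overlay text format and its bottom-of-frame location are pure assumptions; the regex would need adjusting to match what your cameras actually burn into the footage.

```python
import re

def parse_temperature(overlay_text):
    """Pull a temperature like '23°C' or '-5 C' out of OCR'd overlay text.

    The overlay format is an assumption; adjust the pattern to your cameras.
    Returns the numeric value, or None if no temperature is found.
    """
    m = re.search(r"(-?\d+(?:\.\d+)?)\s*°?\s*[CF]", overlay_text)
    return float(m.group(1)) if m else None

def temperature_from_clip(path):
    """Grab the first frame of a clip and OCR its overlay strip.

    opencv-python and pytesseract are assumed to be installed; they are
    imported lazily so parse_temperature works without them.
    """
    import cv2
    import pytesseract
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None
    # Camera overlays usually sit in a strip at the bottom of the frame
    # (assumed height of 60 px here).
    strip = frame[-60:, :]
    return parse_temperature(pytesseract.image_to_string(strip))

print(parse_temperature("2023-10-21 04:12  23°C  MOON: WAXING"))  # 23.0
```

Since the overlay is rendered text rather than a natural scene, plain Tesseract usually does well, especially if you threshold the strip to black-and-white first.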