Camera Traps / Feed

Looking for a place to discuss camera trap troubleshooting, compare models, collaborate with members working with other technologies like machine learning and bioacoustics, or share and exchange data from your camera trap research? Get involved in our Camera Traps group! All are welcome whether you are new to camera trapping, have expertise from the field to share, or are curious about how your skill sets can help those working with camera traps. 


Video camera trap analysis help

Hello, I'm a complete newbie, so any help would be appreciated. I have never trained any ML models (I have junior/mid experience with Python and R) nor annotated data, but would like to...


Hi there!

You should definitely check out VIAME, which includes a video annotation tool in addition to deep learning neural network training and deployment. It has a user-friendly interface, a publicly available server option that removes the need for a GPU-enabled computer for network training, and an amazing support staff who will help with your questions. You can also download the VIAME software for local use. The tool was originally developed for marine life annotation, but can be used for any type of video or annotation (we are using it to annotate pollinators in video). Super easy to annotate with as well. Worth checking out!

Liz Ferguson


Is anyone (or any platform) supporting ML for camera trap video processing (ID-ing jaguars)?

Hi wildlabbers, I have another colleague looking for support for getting started using AI for processing videos coming out of their camera traps - specifically for species ID...


Hey there community! I'm new here and looking for lots of answers too! ;-)

We are also searching for the ideal app/AI technology to ID different cats, but other mammals too if possible:

- Panthera onca

- Leopardus wiedii

- Leopardus pardalis


and if possible:


- Puma concolor

- Puma yagouaroundi

- Leopardus colocolo

- Tapirus terrestris


Every recommendation is very welcome, thanks!



Automatic extraction of temperature/moon phase from camera trap video

Hey everyone, I'm currently trying to automate the annotation process for some camera trap videos by extracting metadata from the files (mp4 format). I've been tasked to try...


Hi Lucy

As others have mentioned, camera trap temperature readouts are inaccurate, and you have the additional problem that the camera's temperature can rise by 10 °C if the sun shines on it.

I would also agree with the suggestion of getting the moon phase data off the internet.


Do you need to do this for just one project?  And do you use the same camera make/model for every deployment?  Or at least a finite number of camera makes/models?  If the number of camera makes/models you need to worry about is finite, even if it's large, I wouldn't try to solve this for the general case, I would just hard-code the pixel ranges where the temperature/moon information appears in each camera model, so you can crop out the relevant pixels without any fancy processing.  From there it won't be trivial, exactly, but you won't need AI. 

You may need separate pixel ranges for night/day images for each camera; I've seen cameras that capture video with different aspect ratios at night/day (or, more specifically, different aspect ratios for with-flash and no-flash images).  If you need to determine whether an image is grayscale/color (i.e., flash/no-flash), I have a simple heuristic function for this that works pretty well.
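That heuristic isn't shown in the post; as an illustrative assumption (not the poster's actual code), a simple version checks the per-pixel spread between color channels, since IR-flash frames are near-grayscale:

```python
def is_grayscale_frame(pixels, tolerance=5, max_colored_fraction=0.01):
    """Guess whether a frame is grayscale (IR flash) or color (daylight).

    pixels: iterable of (r, g, b) tuples for one frame (or a subsample).
    A pixel counts as 'colored' when its channel spread exceeds `tolerance`;
    the frame is called grayscale when few pixels are colored. Both
    thresholds are assumptions to tune per camera.
    """
    pixels = list(pixels)
    colored = sum(1 for r, g, b in pixels
                  if max(r, g, b) - min(r, g, b) > tolerance)
    return colored / len(pixels) < max_colored_fraction
```

A tolerance a little above zero absorbs JPEG compression noise, which leaves small channel differences even in IR frames.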

Assuming you can manually define the relevant pixel ranges, which should just take a few minutes if it's less than a few dozen camera models, I would extract the first frame of each video to an image, then crop out the temperature/moon pixels.

Once you've cropped out the temperature/moon information, for the temperature, I would recommend using PyTesseract (an OCR library) to read the characters.  For the moon information... I would either have a small library of images for all the possible moon phases for each model, and match new images against those, or maybe - depending on the exact style they use - you could just, e.g., count the total number of white/dark pixels in that cropped moon image, and have a table that maps "percentage of white pixels" to a moon phase.  For all the cameras I've seen with a moon phase icon, this would work fine, and would be less work than a template matching approach.
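A minimal sketch of that crop-and-count idea, with the OCR step left as a `pytesseract.image_to_string` call on the temperature crop. The crop box, brightness threshold, and phase cut-points below are placeholder assumptions you would hard-code per camera make/model:

```python
# Placeholder crop box for the moon icon, as (left, top, right, bottom);
# in practice you would hard-code one of these per camera make/model.
MOON_CROP = (12, 4, 44, 36)

def white_fraction(gray_rows, crop, threshold=200):
    """Fraction of pixels in `crop` brighter than `threshold`.
    gray_rows: 2D list of 0-255 grayscale values (one inner list per row)."""
    left, top, right, bottom = crop
    pixels = [v for row in gray_rows[top:bottom] for v in row[left:right]]
    return sum(v > threshold for v in pixels) / len(pixels)

def moon_phase(fraction):
    """Map 'percentage of white pixels' to a coarse phase label.
    Cut-points are illustrative assumptions, not calibrated values."""
    if fraction < 0.10:
        return "new"
    if fraction < 0.40:
        return "crescent"
    if fraction < 0.60:
        return "quarter"
    if fraction < 0.90:
        return "gibbous"
    return "full"
```

With only a handful of possible icons per camera model, a lookup like this is far less work than template matching, and trivially fast.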

FYI I recently wrote a function to do datetime extraction from camera trap images (it would work for video frames too), but there I was trying to handle the general case where I couldn't hard-code a pixel range.  That task was both easier and harder than what you're doing here: harder because I was trying to make it work for future, unknown cameras, but easier because datetimes are relatively predictable strings, so you know when you find one, compared to, e.g., moon phase icons.

In fact maybe - as others have suggested - extracting the moon phase from pixels is unnecessary if you can extract datetimes (either from pixels or from metadata, if your metadata is reliable).
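If reliable datetimes are available, the phase can indeed be computed instead of read from pixels. A sketch using the mean synodic month and one known new moon; it is only accurate to within a day or so, which is fine for coarse phase labels (the cut-points below are illustrative):

```python
from datetime import datetime

SYNODIC_MONTH = 29.530588853                        # mean lunation, in days
REFERENCE_NEW_MOON = datetime(2000, 1, 6, 18, 14)   # a known new moon (UTC)

def moon_age_days(dt):
    """Days elapsed since the most recent new moon, by the mean cycle."""
    elapsed = (dt - REFERENCE_NEW_MOON).total_seconds() / 86400.0
    return elapsed % SYNODIC_MONTH

def phase_label(dt):
    """Coarse phase label for a capture timestamp."""
    age = moon_age_days(dt)
    if age < 1.85 or age > 27.68:
        return "new"
    if age < 12.91:
        return "waxing"
    if age < 16.61:
        return "full"
    return "waning"
```

For higher accuracy you would use an ephemeris library, but for matching a camera's own coarse moon icon this approximation is plenty.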

camtrapR has a function that does what you want. I have not used it myself, but it seems straightforward to use and it can run across directories of images:


Eliminatha, WiCT 2023 Tanzania

Passionate wildlife researcher and tech user, making strides in Grumeti, in the heart of the western Serengeti, Tanzania, using camera traps to gain priceless insights into the lives of its unique fauna and contributing greatly to understanding and preserving the Serengeti's ecosystems.


setting up a network of cameras connected to a server via WIFI

We need to set up a wildlife monitoring network based on camera traps in Doñana National Park, Spain. We are interested in setting...


Great discussion! Pet (and other 'home') cams are an interesting option, as @antonab mentioned. I've been testing one at home that physically tracks moving objects (and does a pretty good job of it), connects to my home network and can be live previewed, all for AUD69 (I bought it on special; normal retail is AUD80):

On the Wifi front, and a bit of a tangent, has anyone done any work using 'HaLow' (see below for example) as it seems like an interesting way to extend Wifi networks?

Cool thread!

I will be testing Reolink Wi-Fi cameras in combination with solar-powered TP-Link long-range Wi-Fi antennas/repeaters later this field season for monitoring arctic fox dens at our remote off-grid site in Greenland. The long-range Wi-Fi antennas are rather power hungry, but with sufficient solar panel and battery capacity I am hopeful it will work.
I am looking forward to exploring the links and hints above after the field season.


Alternative to Reconyx Ultrafire

The Reconyx Ultrafire has been discontinued, and we rely on these heavily for research because of the modifiability and reliability in adverse conditions. We need a high quality...


The two cameras you mention below tick off most of the items in your requirements list.  I think the exception is the “timed start” whereby the camera would “wake up” to arm itself after a certain date. provides security boxes for both.    

Especially if a white flash is useful in your research, you may also want to consider the GardePro T5WF. I don't have a lot of long-term experience with this camera, but it is one of the few that offers a white flash, and it has excellent battery life, especially for night captures. The audio can be a little flaky.

I have done posts on these cameras, including a teardown.  See:

I have heard reports that the HP5 can let in moisture in very wet environments.  This may be a direct water contact type of thing, as we have never had water issues with this camera when it is installed in a lock box (US Northeast, Northwest).    


We prefer the HP5 due to superior image and audio quality.  That said, there is a known issue that with some HP5 cameras, with some fast (> 80 MB/s rated read) and large SD cards, the SD card can become corrupted, preventing the camera from capturing images.  I address this, including a fix via firmware, in another post:

Hope this helps. 



Ideas for easy/fast maintenance of arboreal camera traps 

Hi, a section of my upcoming project will include the deployment of arboreal camera traps up large fruiting trees in primary rainforest of PNG. It would be ideal if these camera...


I use the same wifi trick with Reolink solar cameras looking at tree cavities (Austrian mountain forest). You can even put the mobile router on a drone to get a connection to the cameras.

Hi Ben,

I would be interested to see if the Instant Detect 2.0 camera system might be useful for this.

The cameras can transmit thumbnails of the captured images using LoRa radio to a Base Station. You could then see all the captured images at this Base Station, as well as the camera's battery and memory information (device health). In addition, you could also change camera settings from the Base Station so you would not need to reclimb the trees to change from PIR sensitivity high to medium for instance.

The Instant Detect 2.0 cameras also have an external power port so a cable could be run to the ground to a DC 12V battery for long term power.

If you wanted to, you could also connect the Base Station to the Cloud using satellite connectivity, so that you can monitor the whole system remotely, but this will incur more cost and power usage of the Base Station.

I'd be keen to hear your thoughts,




Instant Detect 2.0 and related cost

I am doing a research project on rhino poaching at Kruger National Park. I was impressed with the idea of Instant Detect 2.0. I do not know the cost involved with installing that...


Hi Kaarthika, hi all,

ZSL's Instant Detect 2.0 is currently undergoing Beta testing with external partners and so is still pre-production. We therefore do not have final pricing for the system. 

That said, we have a manufacturing partner fully set up who has already completed two full build rounds of the system, one in 2020 and another in 2023. This means we have a very good idea of the system's build costs and what these are likely to be when we can manufacture the system in volume.

While I cannot release this pricing yet, I am confident that we will have an unparalleled proposition.

In particular, the satellite airtime package we can supply to conservationists due to the generosity of the Iridium Satellite Company means that each system can send 3,600 (25-50KB) images a month from anywhere in the world for a single fixed fee. This equates to around a 97% discount to the normal commercial rates. 

We are currently very busy fundraising so that we can make this final step to scale the system. 

If we can secure this funding, we hope to go into volume production by mid-2024.

Best wishes,



DeepFaune: a software for AI-based identification of mammals in camera-trap pictures and videos

Hello everyone, just wanted to advertise here the DeepFaune initiative that I lead with Vincent Miele. We're building AI-based species recognition models for camera-trap...


Hello to all, new to this group. This is very exciting technology. Can it work for ID of individual animals? We are interested in AI for identifying individual jaguars (spots) and Andean bears (face characteristics). Any recommendations? Contacts? Thanks!


That's a very interesting question and use case (I'm not from DeepFaune). I'm playing with this at the moment and intend to integrate it into my other security software, which can capture and send video alerts. I should have this working within a few weeks, I think.

The structure of that software is two-stage: the first stage identifies that there is an animal and finds its bounding box, and then there is a classification stage. I intend to merge the two stages so that it behaves like a YOLO model, with the output being bounding boxes as well as the type of animal.

However, my security software can cascade models. So if you were able to train a single-stage classifier that identifies your particular bears, you could cascade all of these models in my software to generate an alert with a video saying which bear it was.
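For readers new to this structure, the detect-then-classify cascade described above can be sketched as follows; the detector and classifier here are stand-in callables, not DeepFaune's or the security software's actual API:

```python
def run_pipeline(frame, detector, classifier, det_threshold=0.5):
    """Two-stage pipeline: a generic animal detector proposes bounding
    boxes, then a species/individual classifier runs on each crop.

    detector(frame)  -> iterable of (box, detection_score)
    classifier(crop) -> (label, classification_score)
    Returns a list of (box, label, combined_score).
    """
    results = []
    for box, det_score in detector(frame):
        if det_score < det_threshold:
            continue  # skip low-confidence detections
        # Crop if the frame object supports it; otherwise classify the frame.
        crop = frame.crop(box) if hasattr(frame, "crop") else frame
        label, cls_score = classifier(crop)
        results.append((box, label, det_score * cls_score))
    return results
```

Cascading a further individual-ID model would just mean feeding each confident crop to one more classifier inside the same loop.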

Hi @GermanFore ,

I work with the BearID Project on individual identification of brown bears from faces. More recently we worked on face detection across all bear species and ran some tests with identifying Andean bears. You can find details in the paper I linked below. We plan to do more work with Andean bears in 2024.

I would love to connect with you. I'll send you a message with my email address.




Modifying GoPro cameras to be IR sensitive.

Hi all,As a sensory ecologist interested in visually-mediated behaviors of nocturnal animals, a major struggle I encountered early in my career was how to monitor these creatures...


Hi Jay! 

Thanks for posting this here as well as your great presentation in the Variety Hour the other day!



Metadata standards for Automated Insect Camera Traps

Has anyone else watched this webinar from GBIF introducing the data model for camera trap data? I wonder if this is something we can easily adopt/adapt for our sorts of camera traps?


Yes. I think this is really the way to go!

Here is another metadata initiative to be aware of. OGC has been developing a standard for describing training datasets for AI/ML image recognition and labeling. The review phase is over and it will become a new standard in the next few weeks. We should consider its adoption when we develop our own training image collections.


Trail cam recommendations for capturing small, quick mammals at night?

Kia ora, can anyone recommend a trail camera model that is consistently triggered by quick, small mammals, e.g. rats/mice/stoats, at night? Or a trail cam that captures sharp...


Hi @MaddievdW have you considered using a 'tunnel' to help, well, funnel, small critters into a space with a camera that makes it a bit easier for detections? A few years back, we made a PVC pipe tunnel with a protected food lure that seemed to work well with even cheapy trail cameras (and in fact, we ended up having to block some of the IR illumination using a few layers of athletic tape over the LED array). Here's a rather blurry image of a bandicoot we got:

And another of an antechinus:


We weren't that concerned with image quality, as we were actually testing an RFID logger in the tunnel also, and simply wanted to know if we got critters with no tags and what species (if possible). So one thing I'd definitely consider if you go down this path is focal length of the camera. Here's a previous discussion on a similar idea: 

I guess you could always 'calibrate' it to a certain extent by just using a longer section of pipe. Here's what our set up looked like, but again, we had the RFID logger, so a camera-only version would be a bit simpler - the camera and battery was in the screw top section on the left-hand-side of the tunnel image and each entrance had RFID antennas linked backed to a logger in the same compartment as the camera:

All the best for your research,




Addressing each of the questions/issues posed: 

Triggering Camera:   

If you are getting triggers, but empty frames, during known visits by these lickety-split animals, the issue is the trigger speed. Looking at the Browning selection guide, for example, I see that the Elite-HP5 models have a 100 ms advertised minimum trigger speed, which is slightly faster (by 50 ms) than the Dark Ops Pro DCL. That difference is equivalent to about 3 frames (at 60 FPS video), which could be significant with fast-moving targets.

I have found, BTW, counterintuitively, that Browning SpecOps and ReconForce (Elite-HP5) models get to the first frame sooner when taking videos than when taking stills. I don't understand this completely, but it's a thing.

If you are not getting any triggers, then the PIR sensor is somehow missing the target.  Make sure you understand the “detection zones” supported by your camera.  These are not published, but can be determined with some patience and readily available “equipment” – see my post on “Trail Camera Detection Zones” at

Putting more than one camera at a site may also increase the probability that at least one triggers (and may improve lighting, see below)

If you're consistently missing triggers, you may have to consider a non-PIR sensor. Unfortunately, this removes you from the domain of commercial trail cameras. Cognisys makes a number of "active" sensors based on "break beam" and (now) lidar for use with DSLR-based camera traps. You would also have to come up with your own no-glow lighting source, and hack the DSLR camera to remove the (built onto the sensor) IR filter. In our experience, these sets are 10x more expensive and time-consuming vs. commercial trail camera sets, and are only justified by the potential for (a few) superior images.

The species-specific triggers and sets mentioned on this thread seem like a better option. 


Avoiding daytime false triggers: All the commercial trail cameras I'm aware of have a single type of trigger sensor, based on a Passive InfraRed (PIR) sensor and Fresnel lens. Apps and McNutt cover this admirably in "How camera traps work and how to work them," African Journal of Ecology, 2018.

These sensors trigger on changes in certain areas of the thermal field – in practice a combination of a heat and motion in one or more detection zones. They are not decomposable.

Some cameras (e.g. Browning SpecOps, and maybe the Dark Ops Pro?) allow you to set hours of operation so that the camera only triggers at night, for example.  This would cause you to (for sure) miss “off hours” appearances by your target species, but would avoid daytime false triggers. 

No-Glow Image Quality: The good news about longer-wavelength "no-glow" flashes is that animals are less sensitive to them. The bad news is that the CMOS image sensors used by cameras are also less sensitive to the longer IR wavelengths. Less signal leads to lower-quality images. Others have mentioned adding supplemental no-glow illumination. An easy way to do this would be to set up two cameras at each of your sites. When both are triggered, each will "see" twice as much illumination, and image quality will be improved. Browning SpecOps models (at least) have dynamic exposure control on video, which allows this scheme to work (with only a frame or two of washout while the algorithm adjusts exposure). For an example of this effect, see the opening porcupine sequence in our video at


Hi Maddie,

This camera has a very quick reaction time. 


Good Thermal/ Night Vision Cameras?

Hi! I am doing research for starting up my thesis and am trying to figure out the best equipment to do it. My idea is to get night time data of seals hauling out (laying on...


@LucyHReaser Re: at this area in the past, we have tried using a normal IR trail camera, but with very limited sensitivity. I have thought about adding IR fog lights out there to help, but was leaning towards thermal cameras to allow for more types of data to be taken from the images in the future, i.e. age class based on heat signatures.

Thank you all for providing input, I will look into each of these ideas! 

I'm jumping into the discussion with a similar objective. I'm looking for a thermal camera trap (I know about Cacophony). It would be used to improve invasive species monitoring, especially for rats and feral cats.

Any idea?


Hi @mguins , as @kimhendrikse mentioned resolution (and also brand) for thermal cameras can dictate a big jump in price. GroupGets has a budget Lepton (FS - short for 'factory second' I think) if you wanted to check one out: 

They also have a bunch of other Flir products and boards for interfacing with Leptons etc., so worth a browse of the shop. It could also be worth taking a look at Seek modules, some of which @Alasdair has experience with : (e.g. 

They also have modules you can connect to a mobile phone: 

@TopBloke I'd be very keen to see your Lepton camera trap too! 







Turn an old smartphone into an AI camera trap?

I know that there are several AI camera trap developments ongoing, from the PoacherCam to TrailGuard, etc. I also know that it is possible to turn an old phone into a security...


Any news regarding this topic?

Despite the power challenges noted in this thread, I think the "used" stream of smartphones is a viable platform for trail cameras. Having successfully hacked custom features into closed-source trail camera firmware, I am also hoping that software development on smartphones is a better way to do feature innovation on trail cameras.

I have just "started" on a trail cam app for the iPhone 12 Pro (not that "old" yet, but it will be by the time I'm done, and it's the first iPhone with LIDAR). I have done some toy apps on the iPhone before, but am mostly blissfully unaware of how much work this will be :) Nonetheless, the goal is to have a prototype working in the back yard by July 15, 2024. I'll post a project link on this thread as soon as it's up.

I’m just working on requirements now.  My primary focus is on improving image quality, and capture efficiency vs. existing trail cameras for wildlife photography. For example:

  • Using camera image quality library to improve low light captures, exposure, etc. 
  • Improving trigger versatility and accuracy using LIDAR sensor
  • Tracking auto-focus based on LIDAR
  • Negative trigger delay for daylight shots
  • Support for custom lighting via “ensemble” sets

It seems wrong not to leverage cellular connectivity, though this is a lower priority for me because most of our sets are beyond cell phone coverage. 

I did find an interesting app, "Motion Detector Camera" by Phil Bailey. This app uses a parameterizable motion detection algorithm to trigger still images (free version). It's pretty slick. I'm starting with LIDAR because I want a trigger that works in the dark.
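For reference, the core of such a motion-detection trigger is small. A frame-differencing sketch (in Python for readability, though a phone app would use the platform's camera APIs; thresholds are illustrative assumptions):

```python
class MotionTrigger:
    """Fire when enough pixels change between consecutive frames."""

    def __init__(self, pixel_threshold=25, min_changed_fraction=0.02):
        self.prev = None
        self.pixel_threshold = pixel_threshold
        self.min_changed_fraction = min_changed_fraction

    def update(self, gray_pixels):
        """gray_pixels: flat list of 0-255 grayscale values for one frame.
        Returns True when the frame differs enough from the previous one."""
        fired = False
        if self.prev is not None:
            changed = sum(abs(a - b) > self.pixel_threshold
                          for a, b in zip(gray_pixels, self.prev))
            fired = changed / len(gray_pixels) >= self.min_changed_fraction
        self.prev = list(gray_pixels)
        return fired
```

The two parameters trade off sensitivity against false triggers from noise, wind-blown vegetation, and lighting changes, which is exactly why a parameterizable algorithm is useful in the field.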

Note that none of these require any AI processing of the images, though I have no doubt a smart phone would be a great place to do that one way or another.   Do you have specific usage models/requirements in mind for in-phone image processing/classification? 


Q&A: UK NERC £3.6m AI (image) for Biodiversity Funding Call - ask your questions here

In our last Variety Hour, Simon Gardner, Head of Digital Environment at NERC, popped in to share more about their open £3.6m funding call supporting innovation in tools for...


This is super cool! @Hubertszcz, @briannajohns, several others and I are all working towards some big biodiversity monitoring projects for a large conservation project here in Panama. The conservation project is happening already, but Hubert starts on the groundwork in January, and I'm working on a V3 of our open-source automated insect monitoring box to have ready for him by then.


I guess my main question would be whether this funding call is appropriate for this type of project, and what types of assistance are possible through this type of funding (researchers? design time? materials? laboratory/field construction)?
