
Camera Traps / Feed

Looking for a place to discuss camera trap troubleshooting, compare models, collaborate with members working with other technologies like machine learning and bioacoustics, or share and exchange data from your camera trap research? Get involved in our Camera Traps group! All are welcome whether you are new to camera trapping, have expertise from the field to share, or are curious about how your skill sets can help those working with camera traps. 

discussion

Using "motion extraction" for animal identification

Hi all, I am no expert in the underlying machine learning models, algorithms, and AI-things used to identify animals in the various computer-vision tools out there (...


Hi Dhanu,

Our group moved to Wildlife Insights a few years back (for a few reasons, but mostly ease of data upload/annotation by multiple users), so I haven't tried EcoAssist. That said, I will look into it as a pre-Wildlife Insights filter to analyze the tens of thousands of images that get recorded when camera traps start to fail or get confused by sun spots (which can be common at one of our sites, a south-facing slope with sparse canopy cover).

Thanks for sharing!

 

discussion

Wildlife Conservation for "Dummies"

Hello WILDLABS community, For individuals newly venturing into the realm of Wildlife Conservation, especially Software Developers, Computer Vision researchers, or...


Maybe this is obvious, but maybe it's so obvious that you could easily forget to include this in your list of recommendations: encourage them to hang out here on WILDLABS!  I say that in all seriousness: if you get some great responses here and compile them into a list, it would be easy to forget the fact that you came to WILDLABS to get those responses.

I get questions like this frequently, and my recommended entry points are always (1) attend the WILDLABS Variety Hour series, (2) lurk on WILDLABS.net, and (3) if they express a specific interest in AI, lurk on the AI for Conservation Slack.

I usually also recommend that folks visit the Work on Climate Slack and - if they live in a major city - attend one of the in-person Work on Climate events.  You'll see relatively little conservation talk there, but conservation tech is just a small subset of sustainability tech. For someone new to the field who is interested in environmental sustainability - even someone a bit more interested in conservation than in other aspects of sustainability - the sheer number of opportunities in non-conservation climate tech may help them get their hands dirty more quickly than conservation specifically, especially if they're looking to make a full-time career transition.  But of course, I'd rather have everyone working on conservation!

Some good overview papers I'd recommend include: 

I'd also encourage you to follow the #tech4wildlife hashtags on social media! 


 

 

I'm also here for this. This is my first comment... I've been lurking for a while.

I have 20 years of professional knowledge in design, the bulk of it in software design. I also have a keen interest in wildlife. I've never really combined the two, and I'm starting to feel like that is a waste - I have a lot to contribute, and the loss of biodiversity terrifies me. So I'm making a plan that in 2024 I'm going to combine both.

However, if I'm honest with you, I struggle with where to start. There is such a vast amount of information out there that I find myself jumping all over the place. A lot of it is highly scientific, which is great - but I do not have a science background.

As suggested by the post title, a "Wildlife Conservation for Dummies" would be exactly what I am looking for. Because in this case I'm happy to admit I am a complete dummy.

discussion

Testing Raspberry Pi cameras: Results

So, we (mainly @albags ) have done some tests to compare the camera we currently use in the AMI-trap with the range of cameras that are available for the Pi. I said in a thread...


And finally for now, the object detectors are wrapped by a Python websocket network wrapper to make it easy for the system to use different types of object detectors. It usually takes me about half a day to write a new Python wrapper for a new object detector type. You just need to wrap it in the network connection and make it conform to the YOLO way of expressing the hits, i.e. the JSON format that YOLO outputs, with bounding boxes, class names and confidence levels.
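To illustrate, the normalization half of such a wrapper might look like this. This is only a sketch: the function and field names are my own invention, not from the project, and real YOLO JSON field names vary between implementations.

```python
import json

def to_yolo_json(raw_detections, class_names):
    """Convert a detector's raw output into a YOLO-style JSON payload.

    raw_detections: list of (x1, y1, x2, y2, class_index, confidence)
    class_names: maps class index -> human-readable class name
    """
    hits = []
    for x1, y1, x2, y2, cls, conf in raw_detections:
        hits.append({
            "name": class_names[cls],                      # class name
            "confidence": round(float(conf), 4),           # detector score
            "box": {"x1": x1, "y1": y1, "x2": x2, "y2": y2},
        })
    return json.dumps(hits)
```

The websocket layer then only has to ship this string back to the caller, so every detector looks identical to the rest of the system.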

What's more, you can even use multiple object detector models on different parts of a single captured image, and you can cascade the logic - for example, to require multiple object detectors to agree, or to accept a match from any one of several detectors.

It's the perfect anti-poaching system (If I say so myself :) )

Hey @kimhendrikse , thanks for all these details. I just caught up. I like your approach of supporting multiple object detectors and using the python websockets wrapper! Is your code available somewhere?

Yep, here:

Currently it only installs on older Jetsons, as in the coming weeks I'll finish the install code for current Jetsons.


Technically speaking, if you were an IT specialist you could even make it work in WSL2 Ubuntu on Windows, but I haven't published instructions for that. If you were even more of a specialist you wouldn't need WSL2 either. One day I'll publish instructions for that once I've done it. Though it would be slow unless the Windows machine had an NVIDIA GPU and you got PyTorch working with it.

discussion

Open-Source design guide for a low-cost, long-running aquatic stereo camera

Katie Dunkley's project has been getting a heap of attention in the conservation tech community - she very kindly joined Variety Hour to give us a walkthrough of her Open-Source...


This is awesome - thanks for sharing Stephanie!! We actually were looking around for a low-cost video camera to augment an MPA monitoring project locally and this looks like a really great option!

 

Cheers, Liz

Thank you for sharing! Super interesting, as we don't see many underwater stereo cameras! We also use Blue Robotics components in our projects and have found them reliable and easy to work with. 

discussion

Apply to Beta test Instant Detect 2.0

Hi WildLabs,ZSL is looking for Beta testers for Instant Detect 2.0. If you are a conservationist, scientist or wildlife ranger with experience working with innovative...


Will you accept personal/hobbyist users focused on conservation on their small plots of land (10-100 acres)?

I would - and I know others who would - happily pay more than the official conservationists' rate for the service, which could help to further subsidize the project. (Referring to your statement here: https://wildlabs.net/discussion/instant-detect-20-and-related-cost)

discussion

Mesh camera trap network?

Does anyone have something to share about wireless camera traps that make use of a mesh-network type of architecture? One such solution, BuckeyeCam, allows cameras to route images...


Hi Sam,

Impressive!  Any chance the LoRa code is open source?  I should like to take a gander.

Thanks

discussion

Subsea DIY Burnwire for Deep-sea BRUVS

Hello everyone. I'm part of a team working on a low-cost, deep-sea camera (BRUVS) project and we're currently facing challenges with our subsea burnwire release system. We're...


Yeah, from memory we found it difficult to get the relatively high voltage (~50 VDC) and current (I can't remember how much) in a small package, but we had almost no experience back then and gave up fairly quickly. We also found it difficult to get much help from the company, if I remember correctly...

So is the problem with the nichrome waterproofing everything? I picture something like coating the nichrome in high-temp grease (especially where it's in contact with the nylon line, and the line itself) and encapsulating the entire thing in a semi-flexible silicone (so the line can slip out after detachment), with something buoyant to help pull it towards the surface, maybe? Speaking of which, how are the tags being recovered (i.e. do they need to pop to the surface)?

Hi Titus,

We've used this design/procedure for many years with our Deep Sea Camera systems, with good reliability.  Not off-the-shelf, but not hard to make, and most of the materials are inexpensive per unit.  The most expensive item is the M101 connector ($25 ea), but if you get them with extra length on the cable, you can essentially cut it off at the point where it joins the burn-loop and reuse that connector until it gets too short.  You'd also need an F101 connector integrated with your BRUV, this connecting with the burnwire and forming the positive side of the circuit, and a ground - our ground connection goes to a large bolt on the frame near the burnwire loop - but that connector generally shouldn't need replacement unless it gets damaged.

These burnwires generally break in 3-7 min, burning at about 1 A and ~14.5 V.  A thinner version of the coated wire could burn faster or with less power required.
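A quick back-of-the-envelope from those figures (assuming the worst-case 7-minute burn; the numbers below come straight from the post) shows why a release like this is easy on the battery:

```python
# Rough energy budget for one burnwire release, using the figures above.
voltage_v = 14.5      # supply voltage during the burn
current_a = 1.0       # burn current
burn_time_s = 7 * 60  # worst case of the reported 3-7 minute range

power_w = voltage_v * current_a   # ~14.5 W while burning
energy_j = power_w * burn_time_s  # joules consumed per release
energy_wh = energy_j / 3600.0     # ~1.7 Wh worst case
```

Roughly 1.7 Wh per release - a small draw compared with the capacity of a typical 12 V deployment battery.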

We do also employ galvanic releases as backups.  I really like redundancy on recovery mechanisms!  The ones we use are made by International Fishing Devices, Inc.  Various distributors sell certain models of their products (i.e. different time durations) but if you contact them directly, they can also make custom duration ones for you.

 

Hi Titus,

I've used latching solenoids as a release in a freshwater application. The product linked to is the one I have used, but it has been discontinued (it's been quite a while).  Anyway, these little devices hold a plunger in place with a permanent magnet, but release the plunger when a coil is energised that counters the magnet.  The holding force is not great, but more than enough to keep the safety on a mechanical trigger.  The whole device can be potted and sealed (ideally under vacuum to eliminate voids).  When pushing the plunger in to arm the solenoid, there is a definite click when the magnet kicks in, confirming the locked state.

A similar device is the electropermanent magnet, which doesn't have a plunger - in fact it has no moving parts.  You provide the steel piece that this device will release when energised, as with a latching solenoid. It generally has greater holding force than a latching solenoid. I've used these in a seawater application.  It's worth noting that there are ferromagnetic stainless steels that can be used here to avoid corrosion.

Thanks,

-harold

discussion

Thermal cameras for monitoring visitors in highly vulnerable conservation areas

Hi everybody, I'm Alex González, a consultant and researcher in sustainable tourism and conservation. I'm currently consulting for a conservation organisation on the development...


Hi,



This is a really late answer, but I was new to WILDLABS then. I have a security appliance that uses state-of-the-art AI models and user-defined polygon areas of interest, and generates video alerts of intrusions in typically under a second.

Although it's set up to install automatically on NVIDIA AI-at-the-edge boxes, if your intention is to monitor a great many cameras you could also install it on a desktop with a high-end GPU for very high performance. At home I use a desktop with an RTX 2080 Ti and monitor around 15 cameras plus an (older) thermal imaging camera.

I have also tested a high-end model (YOLOv7) on imagery from a high-end thermal camera, and it works fine as well.

Thermal yolov7

Thermal imaging cameras are hellishly expensive though, and I've found that new, extremely light-sensitive cameras like the HIKvision ColorVu series almost obsolete them for people detection at night, at a fraction of the cost.

If you are interested I'd be happy to show you a demo in a video meeting sometime. I'm pretty sure it would meet all your intrusion detection and alerting needs.

My project page is

discussion

Video camera trap analysis help

muh
Hello, I'm a complete newbie so any help would be appreciated. I never trained any ML models (have junior/mid experience with Python and R) nor annotated the data but would like to...


Hi there!, 

You should definitely check out VIAME, which includes a video annotation tool in addition to deep-learning network training and deployment. It has a user-friendly interface, a publicly available server option that removes the need for a GPU-enabled computer for network training, and an amazing support staff to help with your questions. You can also download the VIAME software for local use. The tool was originally developed for marine life annotation, but can be used for any type of video or annotation (we are using it to annotate pollinators in video). Super easy to annotate with as well. Worth checking out!

Cheers, 
Liz Ferguson

discussion

Is anyone or platform supporting ML for camera trap video processing (id-ing jaguar)? 

Hi wildlabbers, I have another colleague looking for support for getting started using AI for processing videos coming out of their camera traps - specifically for species ID...


Hey there community! I'm new here and looking for lots of answers too! ;-)

We are also searching for the most suitable app / AI technology to ID different cats, but also other mammals if possible

- Panthera onca

- Leopardus wiedii

- Leopardus pardalis

 

and if possible:

 

- Puma concolor

- Puma yagouaroundi

- Leopardus colocolo

- Tapirus terrestris

 

Every recommendation is very welcome, thanks!

Sam

discussion

Automatic extraction of temperature/moon phase from camera trap video

Hey everyone, I'm currently trying to automate the annotation process for some camera trap videos by extracting metadata from the files (mp4 format). I've been tasked to try...


Hi Lucy

As others have mentioned, camera trap temperature readouts are inaccurate, and you have the additional problem that the camera's temperature can rise 10°C if the sun shines on it.

I would also agree with the suggestion of getting the moon phase data off the internet.

 

Do you need to do this for just one project?  And do you use the same camera make/model for every deployment?  Or at least a finite number of camera makes/models?  If the number of camera makes/models you need to worry about is finite, even if it's large, I wouldn't try to solve this for the general case, I would just hard-code the pixel ranges where the temperature/moon information appears in each camera model, so you can crop out the relevant pixels without any fancy processing.  From there it won't be trivial, exactly, but you won't need AI. 

You may need separate pixel ranges for night/day images for each camera; I've seen cameras that capture video with different aspect ratios at night/day (or, more specifically, different aspect ratios for with-flash and no-flash images).  If you need to determine whether an image is grayscale/color (i.e., flash/no-flash), I have a simple heuristic function for this that works pretty well.
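A minimal version of such a grayscale/flash heuristic might look like this. To be clear, this is my own sketch, not the author's actual function, and the tolerance value is a guess: a no-flash (IR) frame is usually stored as RGB with three near-identical channels.

```python
import numpy as np

def looks_grayscale(rgb, tol=2.0):
    """Heuristic: treat a frame as a grayscale (flash/IR) capture when the
    mean absolute difference between its colour channels is below `tol`."""
    rgb = np.asarray(rgb, dtype=np.float32)
    diff_rg = np.abs(rgb[..., 0] - rgb[..., 1]).mean()  # red vs green
    diff_gb = np.abs(rgb[..., 1] - rgb[..., 2]).mean()  # green vs blue
    return bool(max(diff_rg, diff_gb) < tol)
```

You would tune `tol` on a handful of known day/night frames from each camera model.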

Assuming you can manually define the relevant pixel ranges, which should just take a few minutes if it's less than a few dozen camera models, I would extract the first frame of each video to an image, then crop out the temperature/moon pixels.

Once you've cropped out the temperature/moon information, for the temperature, I would recommend using PyTesseract (an OCR library) to read the characters.  For the moon information... I would either have a small library of images for all the possible moon phases for each model, and match new images against those, or maybe - depending on the exact style they use - you could just, e.g., count the total number of white/dark pixels in that cropped moon image, and have a table that maps "percentage of white pixels" to a moon phase.  For all the cameras I've seen with a moon phase icon, this would work fine, and would be less work than a template matching approach.
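The white-pixel-fraction idea might be sketched as follows. The phase boundaries here are entirely hypothetical - you would calibrate them against your camera's actual icons.

```python
import numpy as np

# Hypothetical lookup: fraction of lit pixels -> phase label.
# Calibrate these boundaries against real icons from each camera model.
PHASE_TABLE = [
    (0.05, "new"),
    (0.35, "crescent"),
    (0.65, "quarter"),
    (0.95, "gibbous"),
]

def moon_phase_from_icon(icon, white_thresh=200):
    """Classify a cropped moon-phase icon by its share of bright pixels."""
    icon = np.asarray(icon)
    lit_fraction = (icon >= white_thresh).mean()
    for upper_bound, label in PHASE_TABLE:
        if lit_fraction < upper_bound:
            return label
    return "full"
```

For the temperature text, PyTesseract would be applied to the other cropped region in the same loop.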

FYI I recently wrote a function to do datetime extraction from camera trap images (it would work for video frames too), but there I was trying to handle the general case where I couldn't hard-code a pixel range.  That task was both easier and harder than what you're doing here: harder because I was trying to make it work for future, unknown cameras, but easier because datetimes are relatively predictable strings, so you know when you find one, compared to, e.g., moon phase icons.

In fact maybe - as others have suggested - extracting the moon phase from pixels is unnecessary if you can extract datetimes (either from pixels or from metadata, if your metadata is reliable).

camtrapR has a function that does what you want. I have not used it myself, but it seems straightforward to use and it can run across directories of images:

https://jniedballa.github.io/camtrapR/reference/OCRdataFields.html

Link

Eliminatha, WiCT 2023 Tanzania

Passionate wildlife researcher and tech user, making strides in Grumeti, in the heart of the western Serengeti, Tanzania, using camera traps to gain priceless insights into the lives of its unique fauna and contributing greatly to understanding and preserving the Serengeti's ecosystems.

discussion

setting up a network of cameras connected to a server via WIFI

We need to set up a wildlife monitoring network based on camera traps in Doñana National Park, Spain (see also wildlifeobservatory.org).  We are interested in setting...


Great discussion! Pet (and other 'home') cams are an interesting option as @antonab mentioned. I've been testing one at home that physically tracks moving objects (and does a pretty good job of it), connects to my home network and can be live previewed, all for AUD69 (I bought it on special. Normal retail is AUD80): 

On the Wifi front, and a bit of a tangent, has anyone done any work using 'HaLow' (see below for example) as it seems like an interesting way to extend Wifi networks?

Cool thread!

I will be testing Reolink Wi-Fi cameras in combination with solar-powered TP-Link long-range Wi-Fi antennas/repeaters later this field season, for monitoring arctic fox dens at our remote off-grid site in Greenland. The long-range Wi-Fi antennas are rather power hungry, but with sufficient solar panel and battery capacity I am hopeful it will work.
I am looking forward to exploring the links and hints above after the field season.
Cheers,

discussion

Alternative to Reconyx Ultrafire

The Reconyx Ultrafire has been discontinued, and we rely on these heavily for research because of the modifiability and reliability in adverse conditions. We need a high quality...


The two cameras you mention below tick off most of the items in your requirements list.  I think the exception is the "timed start", whereby the camera would "wake up" to arm itself after a certain date.  Camlockbox.com provides security boxes for both.

Especially if a white flash is useful in your research, you may also want to consider the GardePro T5WF.  I don't have a lot of long-term experience with this camera, but it is one of the few that offers a white flash, and it has excellent battery life, especially for night captures.  The audio can be a little flaky.

I have done posts on these cameras, including a teardown.  See:

https://winterberrywildlife.ouroneacrefarm.com/2022/04/10/browning-spec-ops-elite-hp5-teardown/

https://winterberrywildlife.ouroneacrefarm.com/2022/09/26/inside-the-bushnell-core-ds-4k-trail-camera/

https://winterberrywildlife.ouroneacrefarm.com/2023/11/18/gardepro-t5wf-white-flash-trail-camera/

I have heard reports that the HP5 can let in moisture in very wet environments.  This may be a direct water contact type of thing, as we have never had water issues with this camera when it is installed in a lock box (US Northeast, Northwest).    

 

We prefer the HP5 due to superior image and audio quality.  That said, there is a known issue where some HP5 cameras, when used with some fast (>80 MB/s rated read), large SD cards, can corrupt the SD card, preventing the camera from capturing images.  I address this, including a fix via firmware, in another post:

https://winterberrywildlife.ouroneacrefarm.com/2023/11/16/fixing-browning-edge-elite-hp4-and-hp5-sd-card-corruption/

Hope this helps. 

-bob

discussion

Ideas for easy/fast maintenance of arboreal camera traps 

Hi, a section of my upcoming project will include the deployment of arboreal camera traps up large fruiting trees in primary rainforest of PNG. It would be ideal if these camera...


I use the same Wi-Fi trick with Reolink solar cameras looking at tree cavities (Austrian mountain forest). You can even put the mobile router on a drone to get a connection to the cameras.

Hi Ben,

I would be interested to see if the Instant Detect 2.0 camera system might be useful for this.

The cameras can transmit thumbnails of the captured images using LoRa radio to a Base Station. You could then see all the captured images at this Base Station, as well as the camera's battery and memory information (device health). In addition, you could also change camera settings from the Base Station so you would not need to reclimb the trees to change from PIR sensitivity high to medium for instance.

The Instant Detect 2.0 cameras also have an external power port so a cable could be run to the ground to a DC 12V battery for long term power.

If you wanted to, you could also connect the Base Station to the Cloud using satellite connectivity, so that you can monitor the whole system remotely, but this will incur more cost and power usage of the Base Station.

I'd be keen to hear your thoughts,

Thanks,

Sam 

discussion

Instant Detect 2.0 and related cost

I am doing a research project on rhino poaching at Kruger National Park. I was impressed with the idea of Instant Detect 2.0. I do not know the cost involved with installing that...


Hi Kaarthika, hi all,

ZSL's Instant Detect 2.0 is currently undergoing Beta testing with external partners and so is still pre-production. We therefore do not have final pricing for the system. 

Saying this, we have a manufacturing partner fully set up who has already completed two full build rounds of the system, one in 2020 and another in 2023. This means we have a very good idea of the system's build costs and what these are likely to be when we can manufacture the system in volume.

While I cannot release this pricing yet, I am confident that we will have an unparalleled proposition.

In particular, the satellite airtime package we can supply to conservationists due to the generosity of the Iridium Satellite Company means that each system can send 3,600 (25-50KB) images a month from anywhere in the world for a single fixed fee. This equates to around a 97% discount to the normal commercial rates. 
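For a sense of scale, the quoted allowance works out to very roughly 90-180 MB of imagery per system per month (my arithmetic from the figures above, not an official spec):

```python
# Rough monthly data volume implied by the quoted airtime package.
images_per_month = 3600
kb_per_image_low, kb_per_image_high = 25, 50  # quoted 25-50 KB range

mb_low = images_per_month * kb_per_image_low / 1024    # ~88 MB/month
mb_high = images_per_month * kb_per_image_high / 1024  # ~176 MB/month
```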

We are currently very busy fundraising so that we can make this final step to scale the system. 

If we can secure this funding, we hope to go into volume production by mid-2024.

Best wishes,

Sam

discussion

DeepFaune: a software for AI-based identification of mammals in camera-trap pictures and videos

Hello everyone, just wanted to advertise here the DeepFaune initiative that I lead with Vincent Miele. We're building AI-based species recognition models for camera-trap...


Hello to all, new to this group. This is very exciting technology. Can it work for ID of individual animals? We are interested in AI for identifying individual jaguars (spots) and Andean bears (face characteristics). Any recommendations? Contacts? Thanks!

German

That's a very interesting question and use case (I'm not from DeepFaune). I'm playing with this at the moment and intend to integrate it into my other security software, which can capture and send video alerts. I should have this working within a few weeks, I think.

The structure of that software is two-stage: the first stage identifies that there is an animal and its bounding box, and then there's a classification stage. I intend to merge the two stages so that it behaves like a YOLO model, with the output being bounding boxes as well as the type of animal.

However, my security software can cascade models. So if you were able to train a single-stage classifier that identifies your particular bears, you could cascade all of these models in my software to generate an alert with a video saying which bear it was.
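In outline, a detect-then-classify cascade like the one described might look like this. This is a sketch with stand-in model functions (`detect` and `classify` are placeholders you would replace with real model calls), not the actual software:

```python
def two_stage_identify(frame, detect, classify, det_thresh=0.5):
    """Stage 1: a generic animal detector returns (box, confidence) pairs.
    Stage 2: a species/individual classifier runs on each detected crop.

    `detect(frame)` -> list of ((x1, y1, x2, y2), confidence)
    `classify(crop)` -> (label, confidence)
    """
    results = []
    for (x1, y1, x2, y2), det_conf in detect(frame):
        if det_conf < det_thresh:
            continue  # drop weak detections before the expensive stage
        crop = [row[x1:x2] for row in frame[y1:y2]]
        label, cls_conf = classify(crop)
        results.append({
            "box": (x1, y1, x2, y2),
            "label": label,
            # combined score: detection confidence * classification confidence
            "confidence": det_conf * cls_conf,
        })
    return results
```

Cascading in this sense just means the second model only ever sees regions the first model has already flagged, which keeps the per-frame cost low.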

Hi @GermanFore ,

I work with the BearID Project on individual identification of brown bears from faces. More recently we worked on face detection across all bear species and ran some tests with identifying Andean bears. You can find details in the paper I linked below. We plan to do more work with Andean bears in 2024.

I would love to connect with you. I'll send you a message with my email address.

Regards,

Ed

discussion

Modifying GoPro cameras to be IR sensitive.

Hi all,As a sensory ecologist interested in visually-mediated behaviors of nocturnal animals, a major struggle I encountered early in my career was how to monitor these creatures...


Hi Jay! 

Thanks for posting this here as well as your great presentation in the Variety Hour the other day!

Cheers!
