Group

Software and Mobile Apps / Feed

The software and apps used and built by the conservation tech community are as varied as the species and habitats we work to protect. From fighting wildlife crime to collecting and analyzing data to engaging the general public with unique storytelling, apps, software, and mobile games are playing an increasingly large role in our work. Whether you're already well-versed in the world of software, or you're a hardware expert looking for guidance from the other side of the conservation tech field, this group will have interesting discussions, resources, and ideas to offer.

discussion

Automatic extraction of temperature/moon phase from camera trap video

Hey everyone, I'm currently trying to automate the annotation process for some camera trap videos by extracting metadata from the files (mp4 format). I've been tasked to try...

7 0

Hi Lucy

As others have mentioned, camera trap temperature readouts are inaccurate, and you have the additional problem that the camera's temperature can rise by 10 °C if the sun shines on it.

I would also agree with the suggestion of getting the moon phase data off the internet.

 

Do you need to do this for just one project?  And do you use the same camera make/model for every deployment?  Or at least a finite number of camera makes/models?  If the number of camera makes/models you need to worry about is finite, even if it's large, I wouldn't try to solve this for the general case, I would just hard-code the pixel ranges where the temperature/moon information appears in each camera model, so you can crop out the relevant pixels without any fancy processing.  From there it won't be trivial, exactly, but you won't need AI. 

You may need separate pixel ranges for night/day images for each camera; I've seen cameras that capture video with different aspect ratios at night/day (or, more specifically, different aspect ratios for with-flash and no-flash images).  If you need to determine whether an image is grayscale/color (i.e., flash/no-flash), I have a simple heuristic function for this that works pretty well.
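The grayscale check doesn't require anything fancy. This is not the poster's actual heuristic function, just a minimal sketch of one common approach: if the RGB channels of an image are nearly identical, it was almost certainly captured with the IR flash.

```python
import numpy as np
from PIL import Image

def is_grayscale(path, tolerance=3.0):
    """Treat an image as grayscale (i.e., IR/flash) if its RGB channels
    are nearly identical on average. `tolerance` is in 0-255 units."""
    arr = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    diff_rg = np.abs(arr[..., 0] - arr[..., 1]).mean()
    diff_gb = np.abs(arr[..., 1] - arr[..., 2]).mean()
    return max(diff_rg, diff_gb) < tolerance
```

A small tolerance above zero helps with JPEG compression noise, which can leave faint color artifacts even in flash images.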

Assuming you can manually define the relevant pixel ranges, which should just take a few minutes if it's less than a few dozen camera models, I would extract the first frame of each video to an image, then crop out the temperature/moon pixels.

Once you've cropped out the temperature/moon information, for the temperature, I would recommend using PyTesseract (an OCR library) to read the characters.  For the moon information... I would either have a small library of images for all the possible moon phases for each model, and match new images against those, or maybe - depending on the exact style they use - you could just, e.g., count the total number of white/dark pixels in that cropped moon image, and have a table that maps "percentage of white pixels" to a moon phase.  For all the cameras I've seen with a moon phase icon, this would work fine, and would be less work than a template matching approach.
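The "percentage of white pixels" idea can be sketched in a few lines. Assume the first frame has already been extracted to a grayscale array (e.g., via ffmpeg or OpenCV); the crop boxes in `MOON_CROPS` and the breakpoints in `PHASE_BINS` below are hypothetical placeholders that you would measure once per camera model and calibrate against known examples:

```python
import numpy as np

# Hypothetical per-model crop boxes (row0, row1, col0, col1) for the moon
# icon in the info bar; measure these once per camera model.
MOON_CROPS = {
    "BrandX-2020": (700, 720, 60, 80),
}

# Illustrative mapping from fraction of bright pixels to a phase label.
PHASE_BINS = [(0.05, "new"), (0.35, "crescent"), (0.65, "quarter"),
              (0.90, "gibbous"), (1.01, "full")]

def moon_phase(frame, model, white_thresh=200):
    """Crop the moon icon from a grayscale frame and classify its phase
    by the fraction of bright pixels in the crop."""
    r0, r1, c0, c1 = MOON_CROPS[model]
    icon = frame[r0:r1, c0:c1]
    frac = (icon >= white_thresh).mean()
    for upper, label in PHASE_BINS:
        if frac < upper:
            return label
    # For the temperature crop, pytesseract.image_to_string(crop)
    # handles the OCR side.
```

Note this only distinguishes how full the moon is; if you also need waxing vs. waning, you'd compare which half of the icon is bright.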

FYI I recently wrote a function to do datetime extraction from camera trap images (it would work for video frames too), but there I was trying to handle the general case where I couldn't hard-code a pixel range.  That task was both easier and harder than what you're doing here: harder because I was trying to make it work for future, unknown cameras, but easier because datetimes are relatively predictable strings, so you know when you find one, compared to, e.g., moon phase icons.

In fact maybe - as others have suggested - extracting the moon phase from pixels is unnecessary if you can extract datetimes (either from pixels or from metadata, if your metadata is reliable).
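Because datetimes are such predictable strings, pulling them out of OCR output is mostly a regex exercise. This is a sketch, not the function mentioned above; the two patterns are illustrative and you'd extend the list for other layouts:

```python
import re
from datetime import datetime

# Common timestamp layouts seen on camera-trap info bars; extend as needed.
PATTERNS = [
    (r"\b(\d{4})[-/](\d{2})[-/](\d{2})\D{0,3}(\d{2}):(\d{2}):(\d{2})\b",
     "%Y %m %d %H %M %S"),
    (r"\b(\d{2})[-/](\d{2})[-/](\d{4})\D{0,3}(\d{2}):(\d{2}):(\d{2})\b",
     "%m %d %Y %H %M %S"),
]

def find_datetime(ocr_text):
    """Return the first plausible datetime found in OCR output, else None."""
    for pattern, fmt in PATTERNS:
        m = re.search(pattern, ocr_text)
        if m:
            try:
                return datetime.strptime(" ".join(m.groups()), fmt)
            except ValueError:
                continue  # matched digits but not a valid date
    return None
```

The `try/except` matters in practice: OCR confusions (e.g., a misread month of "14" in DD/MM layouts) otherwise crash the pipeline instead of falling through to the next pattern.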

camtrapR has a function that does what you want. I have not used it myself, but it seems straightforward to use and can run across directories of images:

https://jniedballa.github.io/camtrapR/reference/OCRdataFields.html

See full post
discussion

Data integration platforms

Is anyone working in data integration, or does anyone know of other existing tools such as Palantir or Vulcan's EarthRanger?

10 0

Argos has an API 

Iridium data arrives via either email or a server IP

Globalstar (unknown)
 


If you'd like the web services document for Argos, shoot me an email (tgray at woodsholegroup - dotcom).

See full post
discussion

TWS2023 - get in touch

Hi, I'll stick around at TWS2023 in Louisville with our friends from e-obs to do some live Firetail demos and discuss your ideas and requirements. Would be great to get in...

1 1

I'm registered with the TWS2023 app, so feel free to nudge me there as well

See full post
discussion

Payment for ecosystem services - mobile money case study?

Hello everyone! I'm writing an article for the GSMA's flagship annual report on the intersection of mobile money and payment for ecosystem services. I'm looking to...

4 1

The folks at AB Entheos in Nairobi are also looking at wildlife damage insurance https://ab-entheos.co.ke/

See full post
discussion

DeepFaune: a software for AI-based identification of mammals in camera-trap pictures and videos

Hello everyone, just wanted to advertise here the DeepFaune initiative that I lead with Vincent Miele. We're building AI-based species recognition models for camera-trap...

6 4

Hello to all, new to this group. This is very exciting technology. Can it work for ID of individual animals? We are interested in AI for identifying individual jaguars (spots) and Andean bears (face characteristics). Any recommendations or contacts? Thanks!

German

That's a very interesting question and use case (I'm not from DeepFaune). I'm playing with this at the moment and intend to integrate it into my other security software, which can capture and send video alerts. I should have this working within a few weeks, I think.

That software is two-stage: the first stage detects that there is an animal and finds its bounding box, and then a second stage classifies it. I intend to merge the two stages so that it behaves like a YOLO model, outputting bounding boxes along with the type of animal.

However, my security software can cascade models. So if you were able to train a single-stage classifier that identifies your particular bears, you could cascade all of these models in my software to generate an alert with a video saying which bear it was.
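The detect-then-classify pattern described above can be sketched generically. This is not DeepFaune's or the poster's actual API; `detector` and `classifier` are placeholder callables standing in for whatever models you plug in:

```python
import numpy as np

def detect_and_classify(frame, detector, classifier, conf_thresh=0.5):
    """Stage 1: a generic animal detector proposes (box, score) pairs.
    Stage 2: a species/individual classifier labels each cropped box."""
    results = []
    for (x0, y0, x1, y1), score in detector(frame):
        if score < conf_thresh:
            continue                         # drop low-confidence detections
        crop = frame[y0:y1, x0:x1]           # crop the candidate animal
        results.append(((x0, y0, x1, y1), classifier(crop), score))
    return results
```

Cascading a further individual-ID model is just another classifier applied to the crops the earlier stages pass through.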

Hi @GermanFore ,

I work with the BearID Project on individual identification of brown bears from faces. More recently we worked on face detection across all bear species and ran some tests with identifying Andean bears. You can find details in the paper I linked below. We plan to do more work with Andean bears in 2024.

I would love to connect with you. I'll send you a message with my email address.

Regards,

Ed

See full post
discussion

Software for recording field data?

Heya - I'm after modernising our field data collection process from paper records to directly digitised recording. I'm planning on getting 2-3 tablets, and am looking for apps...

11 0

I second Kobo/CyberTracker for tracks, but if you have money for an ArcGIS Online license, Survey123 is great (offline use - but can't record tracks)

See full post
careers

Conservation Systems Developer

New senior position in IUCN's Science and Data Centre: supporting, developing and maintaining the technological foundation that underpins The IUCN Red List of Threatened Species and its underlying database.

0
See full post
article

MoveApps: A Digital Home for Tracking Data Analysis

In this Conservation Tech Showcase case study from 2022 Conservation Tech Award winner MoveApps, you’ll learn how they’re breaking new ground in animal movement research with tracking data analysis tools hosted by the...

1
See full post
discussion

Firetail 11 - GPS and sensor analysis

I'm very happy to announce that Firetail 11 is now available, including an all-new website! (yay!) Version 11 features a great set of options for researchers,...

1 0

Hi Tobias! 

This sounds great and I am looking forward to trying it out after returning from field work! 

Very cool with the Vectronic activity data! I am looking forward to checking how we can use that!

Cheers,

Lars

See full post