Data management and processing tools

Conservation tech work doesn't stop after data is collected in the field. Equally important to success is navigating data management and processing tools. For the many community members who deal with enormous datasets, this group will be an invaluable resource to trade advice, discuss workflows and tools, and share what works for you.

discussion

SURAKHSYA Portal for Human-Elephant Conflict - any updates? 

Hi everyone, I'm looking into proven systems for managing human-wildlife conflict, particularly focused on early warning systems. I'm keen to hear of any examples from your...


Ha - you're already in my thread, I've got your project in there, don't worry!

But it's more that I don't want proof-of-concept, early-R&D-type projects that are destined only for a paper or a hobby project; I want to hear about projects that have some plan for usability and scaling, so that other people can take and implement them.

I think my system is likely the closest thing you will find in terms of production readiness and potential to scale, as it was once a commercial system with complete over-the-air updates more than 10 years ago. It's been in use by many people for more than 10 years and has used AI triggering since 2019. I'm pretty sure no other system can claim that.


So I have the system, but you've got me on the scalability, because to do that you need funding, and I don't have the funding. If I had the funding, I'd be doing it full time. But I've said enough now, so I'll leave it at that.

This thread is off-topic in this conversation, so I'm happy to continue it in the other one. Just noting, though: your system is one example, but not the only one. There are certainly other early warning systems in varying stages of development, testing, and rollout, using different levels of technology (AI or otherwise).

article

The Variety Hour: 2024 Lineup

You’re invited to the WILDLABS Variety Hour, a monthly event that connects you to conservation tech's most exciting projects, research, and ideas. We can't wait to bring you a whole new season of speakers and...

event

The Variety Hour: March 2024

Variety Hour is back! This month we're talking about making AI more accessible with PyTorch, new developments from WildMe and TagRanger, and working with geospatial data with Fauna & Flora. See you there!

discussion

Leveraging Actuarial Skills for Conservation Impact

Hello WILDLABS community, I'm an experienced actuary with a deep passion for wildlife and conservation. With over 15 years in the insurance industry, I've honed my skills in data...

article

Navigating corporate due diligence in the Voluntary Carbon Market

Emerging trends for Nature-Based Solutions project assessments

Thank you for this article, Cassie. What is the pricing structure for Earth Blox/user/month?
Thanks, Cassie. How much is the annual license? I don't see it anywhere on your site.
discussion

Calculating Wingbeat Frequency From Accelerometer Data

Does anyone have any experience calculating WBF from ACC data? I'm trying to accomplish this in R. For the most part, I'm getting back pretty accurate numbers when going in to...


Great suggestion! Diving bird studies and their analyses are actually what has helped me get this far with solving this problem. They happen to have done much the same thing as I'm trying to do, just with more behaviors added. I believe the study was done with murres and kittiwakes.

 

Best,

Travis

I'm very close to solving the problem. Just waiting for a function to run on a fairly large dataset to see the results. I will share the repository link with you when it gets accomplished!

 

The species I'm working with roosts atop cave ceilings and also drops from there to get airborne!

 

Yes, they are triaxial (Technosmart) and body mounted right on their backs.

 

So far, I have created thresholds for different metrics derived from the accelerometer data. Essentially, I sectioned out a bunch of ACC data where I am positive flight is occurring, and did exactly the same with roosting and with crawling around/scratching (activity while roosting). From there, I plotted the distributions of all the metrics to see which ones had unique distributions significantly different from roosting/activity.

Using those distributions, I created thresholds for the important metrics such that all flight behavior was either above or below a certain value for that metric. That let me construct a decision tree based on these metrics, which had pretty solid accuracy.
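For anyone wanting to prototype the same approach, here's a minimal sketch of windowed threshold classification in Python with pandas (the logic ports directly to R); the metric set and threshold values are placeholders, not the actual numbers from this study:

```python
import numpy as np
import pandas as pd

def classify_windows(acc, fs=25, window_s=2.0,
                     vedba_thresh=0.4, heave_sd_thresh=0.25):
    """Label fixed windows of triaxial ACC data (columns x, y, z, in g)
    as flight vs. non-flight using simple per-metric thresholds.
    Threshold values here are placeholders; derive yours from the
    labelled flight/roosting distributions as described above."""
    win = int(fs * window_s)
    rows = []
    for start in range(0, len(acc) - win + 1, win):
        seg = acc.iloc[start:start + win]
        # Dynamic acceleration: subtract the window mean (static/gravity part)
        dyn = seg[["x", "y", "z"]] - seg[["x", "y", "z"]].mean()
        vedba = np.sqrt((dyn ** 2).sum(axis=1)).mean()  # mean VeDBA for window
        heave_sd = seg["z"].std()                       # variability on heave axis
        is_flight = (vedba > vedba_thresh) and (heave_sd > heave_sd_thresh)
        rows.append({"start": start, "vedba": vedba, "heave_sd": heave_sd,
                     "label": "flight" if is_flight else "not_flight"})
    return pd.DataFrame(rows)
```

Each threshold maps onto one split of the decision tree; adding further metrics (pitch, WBF) just means adding further conditions.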

 

The downside is that a small chunk of flight at the beginning and end of each flight bout is not being included in the behavior classification. I noticed that the WBF during those small chunks is indicative of flight, so I'm going to try adding WBF as the last decision on the tree to improve its accuracy.

 

VeDBA is also being calculated, and based on the threshold values I have created for flight, it should not matter how high their head is, rather how low it is, when the x, y, and z thresholds are also met. If that makes sense.

 

Hope I answered most of your questions!

Were you ever able to solve the problem? Interestingly enough, I begin a seal bio-logging study next year!

 

Also, you are correct. The errors were occurring during short-bout flights, along with some spectral leakage, but I may have solved the problem by lowering the window size. I've also corrected for the spectral leakage by creating a separate function that identifies any significant changes in calculated WBF that last < 2 seconds, then counts the number of heave amplitudes within 1 second. I'm using an FFT for the calculations and am just waiting for a function to run on a larger dataset to see if everything comes out the way I'm hoping. Fingers crossed.
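For reference, a minimal sketch of the FFT step in Python; note it uses a Hann taper as a generic leakage mitigation rather than the short-bout correction described above, and the sampling rate and frequency cutoff are placeholder values:

```python
import numpy as np

def wingbeat_frequency(heave, fs=25, window_s=2.0):
    """Estimate the dominant wingbeat frequency (Hz) from the heave axis
    with an FFT over consecutive windows. A shorter window follows short
    flight bouts better, at the cost of frequency resolution."""
    win = int(fs * window_s)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    wbf = []
    for start in range(0, len(heave) - win + 1, win):
        seg = np.asarray(heave[start:start + win], dtype=float)
        seg = seg - seg.mean()          # remove the static (gravity) offset
        seg = seg * np.hanning(win)     # Hann taper reduces spectral leakage
        power = np.abs(np.fft.rfft(seg)) ** 2
        power[freqs < 1.0] = 0.0        # ignore sub-1 Hz body movement (placeholder cutoff)
        wbf.append(freqs[np.argmax(power)])
    return np.array(wbf)
```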

 

Best,

Travis

discussion

Image analysis with volunteers

Hello! I'm working with volunteers on a pilot project using camera traps and PAMs to monitor a mixed species waterbird colony on an Army Corps of Engineers constructed island....


I have a little experience with Timelapse and would say it is definitely worth the invested time.

The developer, Saul Greenberg, has made a ton of documentation on its use and is also very approachable in person if you have any issues.

I can highly recommend it.
discussion

Jupyter Notebook: Aquatic Computer Vision

Dive into underwater computer vision exploration! OceanLabs Seychelles is excited to share a Jupyter notebook tailored for those intrigued by the...


This is quite interesting. Would love to see if we could improve this code using custom models and alternative ways of processing the video stream. 

This definitely seems like the community to do it. I was looking at the thread about wolf detection and it seems like people here are no strangers to image classification. A little overwhelming to be quite honest 😂

While it would be incredible to have a powerful model capable of auto-classifying everything right away and storing all the detected creatures and correlated sensor data straight into a database, I wonder whether, in remote deployments where power (and therefore CPU bandwidth), data storage, and network connectivity are at a premium, it would be more valuable just to highlight moments of interest for lab analysis later. Or, if you do have a cellular connection, you could download just those moments of interest rather than hours and hours of footage.
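That "moments of interest" idea can be prototyped cheaply with frame differencing. A minimal sketch with OpenCV, where the change threshold and minimum gap between hits are placeholders to tune per deployment:

```python
import cv2

def flag_moments(video_path, diff_thresh=12.0, min_gap_s=5.0):
    """Scan a video and return timestamps (s) where the scene changes
    enough to be worth reviewing later, instead of classifying everything."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    prev, moments, last_hit = None, [], -min_gap_s
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)   # suppress sensor noise/ripple
        if prev is not None:
            score = cv2.absdiff(gray, prev).mean()   # mean per-pixel change
            t = frame_idx / fps
            if score > diff_thresh and t - last_hit >= min_gap_s:
                moments.append(t)
                last_hit = t
        prev = gray
        frame_idx += 1
    cap.release()
    return moments
```

Something this light could run on the edge and decide what is worth uploading over a cellular link.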

I'm working on a similar AI challenge at the moment, and hoping to translate my workflow to wolves in the future if needed.

We are all a little overstretched, but if there are no pressing deadlines, it should be possible to explore building an efficient model for object detection and to look at suitable hardware for running these models on the edge.
discussion

Need advice - image management and tagging 

Hello WILDLABS, Our botany team is using drones to survey vertical cliffs for rare and endangered plants. It's going well and we have been able to locate and map many new...


I have no familiarity with Lightroom, but the problem you describe sounds like a pretty typical data storage and lookup issue. This is the kind of problem many software engineers deal with on a daily basis, and in almost every circumstance this class of problem is solved using a database.

In fact, a potentially useful way to frame it is that the Lightroom database is not providing the feature set you need.

It seems likely that you are not looking for a software development project, and setting up your own DB would certainly require some effort. But if this is a serious issue for your work, if you hope to scale your work up, or if you want to bring many other participants into your project, it might make sense to have an information system that better fits your needs.

There are many different databases out there, optimized for different sorts of things. For this I might suggest taking a look at MongoDB with GridFS, for a couple of reasons.

  1. It looks like your metadata is in JSON format. Many DBs are JSON compatible, but Mongo is JSON native: it is especially good at storing and retrieving JSON data, and its JSON search capabilities are excellent and easy to use. It looks like you could export your data directly from Lightroom into Mongo, so it might be pretty easy actually.
  2. Mongo with the GridFS package is an excellent repository for arbitrarily large image files.
  3. It is straightforward to make a Mongo database accessible via a website.
  4. It is open source (in a manner of speaking) and you can run it for free.

Disclaimer: I used to work for MongoDB.  I don't anymore and I have no vested interest at all, but they make a great product that would really crush this whole class of problem.
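To make that concrete, here's a minimal sketch in Python with pymongo, assuming a local MongoDB instance; the database name, filenames, field names, and query tag are all hypothetical:

```python
from pymongo import MongoClient
import gridfs
import json

client = MongoClient("mongodb://localhost:27017")
db = client["cliff_surveys"]          # hypothetical database name
fs = gridfs.GridFS(db)                # GridFS handles arbitrarily large files

# Store one drone image plus its exported JSON metadata
with open("IMG_0421.JPG", "rb") as f:          # hypothetical filenames
    image_id = fs.put(f, filename="IMG_0421.JPG")

with open("IMG_0421.json") as f:
    meta = json.load(f)
meta["image_id"] = image_id                    # link the metadata to the stored file
db.photos.insert_one(meta)

# Native JSON queries: e.g. find all photos carrying a given keyword tag
for doc in db.photos.find({"keywords": "rare_plant"}):   # hypothetical tag field
    print(doc["image_id"], doc.get("gps"))
```

From there, exposing the collection through a small web app gives the rest of the team search access without touching Lightroom at all.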

discussion

How are Outdoor Fire Detection Systems Adapted for Small Forest Areas, Considering the Predominance of Indoor Fire Detectors?

How are fire detection mechanisms tailored for outdoor environments, particularly in small forest areas, given that most fire and smoke detectors are designed for indoor use?


Fire detection is a sort of broad idea.  Usually people detect the products of fire, and most often this is smoke.

Many home fire detectors in the US are ionization detectors: a small radioactive source ionizes the air in a sensing chamber, and smoke particles entering the chamber reduce the ion current. More smoke means a bigger drop in current.

For outdoor fire detection, PM2.5 can be a very good smoke proxy, and outdoor PM2.5 sensing is pretty accessible.

This one is very popular in my area. 
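If you're logging readings from a sensor like that, a rolling-baseline alert is often enough to flag smoke. A minimal sketch, assuming you already have a stream of PM2.5 readings from your sensor (the thresholds are placeholders to tune against local background levels):

```python
from collections import deque

def smoke_alerts(pm25_stream, baseline_n=360, spike_factor=3.0, floor=35.0):
    """Yield likely smoke events from a stream of outdoor PM2.5 readings
    (µg/m³) by comparing each reading to a rolling baseline. All three
    parameters are placeholders for local calibration."""
    window = deque(maxlen=baseline_n)
    for reading in pm25_stream:
        if len(window) == baseline_n:
            baseline = sum(window) / baseline_n
            # Require both an absolute floor and a sharp rise over baseline
            if reading > floor and reading > spike_factor * baseline:
                yield reading, baseline
        window.append(reading)
```

Averaging over a long baseline window keeps the alert robust to slow diurnal drift while still catching a fire's sharp rise.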

 

discussion

Wildlife Conservation for "Dummies"

Hello WILDLABS community, For individuals newly venturing into the realm of Wildlife Conservation, especially Software Developers, Computer Vision researchers, or...


Maybe this is obvious, but maybe it's so obvious that you could easily forget to include it in your list of recommendations: encourage them to hang out here on WILDLABS! I say that in all seriousness: if you get some great responses here and compile them into a list, it would be easy to forget that you came to WILDLABS to get those responses.

I get questions like this frequently, and my recommended entry points are always (1) attend the WILDLABS Variety Hour series, (2) lurk on WILDLABS.net, and (3) if they express a specific interest in AI, lurk on the AI for Conservation Slack.

I usually also recommend that folks visit the Work on Climate Slack and, if they live in a major city, attend one of the in-person Work on Climate events. You'll see relatively little conservation talk there, but conservation tech is just a small subset of sustainability tech. For someone new to the field who is interested in environmental sustainability, even if they're a bit more interested in conservation than in other aspects of sustainability, the sheer number of opportunities in non-conservation climate tech may help them get their hands dirty more quickly than conservation specifically, especially if they're looking to make a full-time career transition. But of course, I'd rather have everyone working on conservation!

Some good overview papers I'd recommend include: 

I'd also encourage you to follow the #tech4wildlife hashtags on social media!

I'm also here for this. This is my first comment... I've been lurking for a while.

I have 20 years of professional knowledge in design, with the bulk of that being software design. I also have a keen interest in wildlife. I've never really combined the two, and I'm starting to feel like that is a waste; I have a lot to contribute. The loss of biodiversity is terrifying me. So I'm making a plan that in 2024 I'm going to combine both.

However, if I’m honest with you – I struggle with where to start. There are such vast amounts of information out there I find myself jumping all over the place. A lot of it is highly scientific, which is great – but I do not have a science background.

As suggested by the post title... a "Wildlife Conservation for Dummies" would be exactly what I am looking for, because in this case I'm happy to admit I am a complete dummy.

discussion

Opinions or experience with Firetail movement analysis software?

Hey everyone, Does anyone here have any experience or opinions on Firetail for processing/analyzing movement data? I have always used R with all of my movement data but I have been...


Hi Travis! 

I'm a developer in the Firetail team and also worked with R a lot during my PhD. 

The goals of both projects are quite different. Using Firetail definitely does not mean you can no longer use R or vice versa. Firetail's focus is on the interactive, visual exploration and annotation of your data. It is meant to be used by scientists, conservationists or stakeholders analysing their projects. 

It may be used to pinpoint regions/time-windows and visualize data suitable for downstream analysis in R, or to generate reports regularly. Firetail won't replace algorithm X using a distinct set of parameters as required by reviewer R, but it will help you understand your data and tell the story.

The basic workflows of Firetail are meant to be intuitive and we seek to support a wide range of data out of the box (plus, 1:1 customer service when you run into problems). 
We also implement additional workflows based on ideas that we receive from you all and seek to integrate interfaces to whatever upstream/downstream tools you require for your daily work.

Feel free to contact me ([email protected]) for specific questions or just use this thread :)

Best,
Tobias

Hi Tobias!

 

This is great to hear. This seems to be exactly what I am looking for as I approach my accelerometry data: identifying certain behaviors through thresholds, then manually verifying them. This sounds like a great complement to what I've done in R with the data so far. Thanks for the info! I will most definitely give this a try!

I may take you up on the offer of emailing you with a couple of quick questions once I start (I appreciate that!)

 

Best,

Travis

discussion

Automatic extraction of temperature/moon phase from camera trap video

Hey everyone, I'm currently trying to automate the annotation process for some camera trap videos by extracting metadata from the files (mp4 format). I've been tasked to try...


Hi Lucy

As others have mentioned, camera trap temperature readouts are inaccurate, and you have the additional problem that the camera's temperature can rise 10 °C if the sun shines on it.

I would also agree with the suggestion of getting the moon phase data off the internet.

 

Do you need to do this for just one project? And do you use the same camera make/model for every deployment, or at least a finite number of camera makes/models? If the number of makes/models you need to worry about is finite, even if it's large, I wouldn't try to solve this for the general case. I would just hard-code the pixel ranges where the temperature/moon information appears in each camera model, so you can crop out the relevant pixels without any fancy processing. From there it won't be trivial, exactly, but you won't need AI.

You may need separate pixel ranges for night/day images for each camera; I've seen cameras that capture video with different aspect ratios at night/day (or, more specifically, different aspect ratios for with-flash and no-flash images).  If you need to determine whether an image is grayscale/color (i.e., flash/no-flash), I have a simple heuristic function for this that works pretty well.
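A common version of such a grayscale check (a sketch, not necessarily the function referred to above) just tests whether the R, G, and B channels are nearly identical across the frame; the tolerance values are placeholders:

```python
import numpy as np
from PIL import Image

def is_grayscale(path, tol=2, frac=0.99):
    """Guess whether an image is effectively grayscale (e.g. an IR/flash
    frame) by checking that R≈G≈B for almost every pixel."""
    arr = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
    r, g, b = arr[..., 0], arr[..., 1], arr[..., 2]
    # int16 avoids uint8 wraparound when subtracting channels
    close = (np.abs(r - g) <= tol) & (np.abs(g - b) <= tol)
    return close.mean() >= frac
```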

Assuming you can manually define the relevant pixel ranges, which should just take a few minutes if it's less than a few dozen camera models, I would extract the first frame of each video to an image, then crop out the temperature/moon pixels.

Once you've cropped out the temperature/moon information, for the temperature I would recommend using PyTesseract (an OCR library) to read the characters. For the moon information, I would either keep a small library of images of all the possible moon phases for each model and match new images against those, or maybe, depending on the exact style they use, you could just count the total number of white/dark pixels in that cropped moon image and have a table that maps "percentage of white pixels" to a moon phase. For all the cameras I've seen with a moon phase icon, this would work fine, and it would be less work than a template matching approach.
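To make that concrete, here's a minimal sketch combining both steps; the crop boxes, camera model name, and white-pixel cutoff are all hypothetical and would need to be measured per camera model:

```python
import cv2
import pytesseract

# Hypothetical per-model crop boxes (x, y, w, h) for the overlay strip
CROPS = {"ModelX": {"temp": (40, 1040, 160, 40), "moon": (220, 1040, 40, 40)}}

def read_overlay(video_path, model="ModelX"):
    """Grab the first frame of a video, crop the hard-coded temperature
    and moon regions, OCR the temperature, and estimate the lit fraction
    of the moon icon from its white-pixel percentage."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None
    x, y, w, h = CROPS[model]["temp"]
    temp_text = pytesseract.image_to_string(
        frame[y:y + h, x:x + w], config="--psm 7")  # psm 7: single text line
    x, y, w, h = CROPS[model]["moon"]
    moon = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    lit_fraction = (moon > 128).mean()   # map via a lookup table to a phase
    return temp_text.strip(), lit_fraction
```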

FYI I recently wrote a function to do datetime extraction from camera trap images (it would work for video frames too), but there I was trying to handle the general case where I couldn't hard-code a pixel range.  That task was both easier and harder than what you're doing here: harder because I was trying to make it work for future, unknown cameras, but easier because datetimes are relatively predictable strings, so you know when you find one, compared to, e.g., moon phase icons.

In fact maybe - as others have suggested - extracting the moon phase from pixels is unnecessary if you can extract datetimes (either from pixels or from metadata, if your metadata is reliable).

camtrapR has a function that does what you want. I have not used it myself, but it seems straightforward to use, and it can run across directories of images:

https://jniedballa.github.io/camtrapR/reference/OCRdataFields.html
