
Open Source Solutions / Feed

This group is a place to share low-cost, open-source devices for conservation; to describe how they are being used, including the needs they address and how they fit into the wider conservation tech market; to identify obstacles to advancing the capacity of these technologies; and to discuss the future of these solutions - particularly their sustainability and how best to collaborate moving forward.

discussion

Pytorch-Wildlife: A Collaborative Deep Learning Framework for Conservation (v1.0)

Welcome to Pytorch-Wildlife v1.0. At the core of our mission is the desire to create a harmonious space where conservation scientists from all over the globe can unite, share, and...

9 3

Hello @hjayanto, you are precisely the kind of collaborator we are looking to work with closely to improve the user-friendliness of Pytorch-Wildlife in our upcoming updates. Please feel free to send us any feedback, either through GitHub issues or here! We aim to make Pytorch-Wildlife more accessible to individuals with limited or no engineering experience. Currently, we have a Hugging Face demo UI (https://huggingface.co/spaces/AndresHdzC/pytorch-wildlife) to showcase the existing functionality in Pytorch-Wildlife. Please let us know if you encounter any issues while using the demo. We are also preparing a tutorial for those interested in Pytorch-Wildlife and will keep you updated on this!

See full post
discussion

Passionate engineer offering funding and tech solutions pro bono.

My name is Krasi Georgiev and I run an initiative focused on providing funding and tech solutions for stories with a real-world impact. The main reason is that I am passionate...

2 1

Hi Krasi! Greetings from Brazil!



That's a cool journey you've started - congratulations! And I felt like theSearchLife resonates with the work I'm involved in around here. In a nutshell, I live at the heart of the largest remaining stretch of Atlantic Forest on the planet - one of the most biodiverse biomes in existence. The subregion where I live is named after and bathed by the "Rio Sagrado" (Sacred River), a magnificent water body with very rich cultural significance to the region (it has served as a safe zone for fleeing slaves). Well, the river and the entire bioregion are currently under threat from a truly devastating railroad project which, to say the least, is planned to cut through over 100 water springs!



In the face of that, the local community (myself included) has been mobilizing to raise awareness of the issue and hopefully stop this madness (fueled by strong international forces). One of the ways we've been fighting this is by seeking recognition of the sacred river as an entity with legal rights, one that can manifest itself in court against such threats. To illustrate what this would look like, I've been developing an AI (LLM) powered avatar for the river, which could perhaps serve as its human-relatable voice. An existing prototype of the avatar is available here. It has been fine-tuned on over 20 scientific papers about the Sacred River watershed.



Right now, others and I are mobilizing to secure the conditions/resources to develop the next version of the avatar, which would include remote sensing capabilities so that the avatar is directly connected to the river and could possibly write full scientific reports on its physical properties (i.e. water quality) and the surrounding biodiversity. In fact, three other members of the WildLabs community and I have just applied to the WildLabs Grant program to accomplish that. Hopefully the results will be positive.



Finally, it's worth mentioning that our mobilization around providing an expression medium for the river has been multimodal, including the creation of a short film based on theatrical mobilizations we held during a festival dedicated to the river and its surrounding more-than-human communities. You can check that out here:

https://vimeo.com/manage/videos/850179762

Let's chat if any of that catches your interest!

Cheers!

Hi Danilo, you seem very passionate about this initiative, which is a good start.
By an interesting coincidence, I am starting another project for coral reefs in the Philippines which also requires water analytics, so I can probably work on both projects at the same time.

Let's have a call and discuss - I'll send you a PM with my contact details.

There is a tech glitch and I don't get email notifications from here.

See full post
discussion

Monitoring setup in the forest based on 2.4 GHz WiFi.

I am planning to set up the network wirelessly at 2.4 GHz. Can I get data on signal distortion in a forest area? Is there any special...

5 0

Hi Dilip,

I do not have data about signal distortion in forest areas for the signal you intend to use.

However, in a savannah environment, with a tower placed on the highest point of the park, a LoRa signal (~900 MHz) is less distorted than a WiFi signal (2.4 GHz). This follows from basic physics: frequency determines wavelength, and the longer the wavelength (i.e. the lower the frequency), the less the signal is obstructed.

So, without interfering with your design, I would say that in a forest configuration WiFi will need more access points deployed and may be more costly; and even using LoRa in your context, you will need more gateways than I do in a savannah.
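
To put rough numbers on that frequency difference, here is a minimal Python sketch combining the standard free-space path loss formula with Weissberger's foliage attenuation model (the 500 m link distance and 100 m foliage depth are illustrative assumptions, not measurements):

import math

def fspl_db(distance_m, freq_mhz):
    # Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    return 20 * math.log10(distance_m / 1000) + 20 * math.log10(freq_mhz) + 32.44

def weissberger_db(foliage_m, freq_ghz):
    # Weissberger's modified exponential decay model for foliage loss,
    # valid for foliage depths up to ~400 m.
    if foliage_m <= 14:
        return 0.45 * freq_ghz ** 0.284 * foliage_m
    return 1.33 * freq_ghz ** 0.284 * foliage_m ** 0.588

for name, f_mhz in [("LoRa 900 MHz", 900), ("WiFi 2.4 GHz", 2400)]:
    loss = fspl_db(500, f_mhz) + weissberger_db(100, f_mhz / 1000)
    print(f"{name}: ~{loss:.0f} dB total loss (500 m link, 100 m of foliage)")

On those assumptions, the 2.4 GHz link loses roughly 15 dB more than the 900 MHz one, which translates directly into shorter usable range.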

To work out the approximate number of gateways, you may need to use terrain visibility analysis.

To design the camera deployment, you will need to comply with the sampling methods defined in your research. However, if it is for surveillance purposes, you may need to rely on terrain visibility analysis there as well.
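
For the visibility analysis, the core building block is a line-of-sight test over a digital elevation model (DEM). A minimal sketch (the mast heights and synthetic ridge are made-up examples, and it ignores earth curvature, which is fine for short links):

import numpy as np

def has_line_of_sight(dem, a, b, mast_a=10.0, mast_b=2.0):
    # a, b are (row, col) grid indices; mast heights in the DEM's units.
    n = int(max(abs(b[0] - a[0]), abs(b[1] - a[1]))) + 1
    rows = np.linspace(a[0], b[0], n).round().astype(int)
    cols = np.linspace(a[1], b[1], n).round().astype(int)
    # Height of the straight ray between the two mast tops.
    ray = np.linspace(dem[a] + mast_a, dem[b] + mast_b, n)
    # Visible if the terrain stays below the ray between the endpoints.
    return bool(np.all(dem[rows, cols][1:-1] <= ray[1:-1]))

dem = np.zeros((100, 100))
dem[50, :] = 40.0  # synthetic 40 m ridge across the middle
print(has_line_of_sight(dem, (10, 10), (90, 90)))  # False: the ridge blocks it

Running this for every candidate camera cell against each candidate gateway site gives you a crude viewshed for estimating gateway counts.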

Best regards.

I've got quite a lot of experience with wireless in forested areas and over long(ish) ranges.

Using a WiFi mesh is totally possible, and it will work.  You will likely not get great range between units, and you will likely need your mesh to be fairly adaptable as conditions change.

Wireless and forests interact in somewhat unpredictable ways, it turns out.  Generally, wireless is attenuated by water in the line-of-sight between stations.  From the WiFi perspective, a tree is just a lot of water up in the air.  Denser forest = more water = worse communications. LoRa @ 900 MHz is less prone to this issue than WiFi @ 2.4 GHz and way less prone than WiFi @ 5 GHz.  But LoRa is also fairly low data rate.  Streaming video via LoRa is possible with a lot of work, but video streaming is not at all what LoRa was built to do, and it does it quite poorly at best.
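
To see just how low-rate LoRa is, here's a sketch of the standard Semtech time-on-air calculation (51 bytes is a typical LoRaWAN payload limit at high spreading factors; the 125 kHz bandwidth and 4/5 coding rate are assumptions):

import math

def lora_airtime_ms(payload_bytes, sf, bw_hz=125_000, cr=1, preamble=8,
                    explicit_header=True, crc=True):
    t_sym = (2 ** sf) / bw_hz * 1000  # symbol duration in ms
    de = 1 if (bw_hz == 125_000 and sf >= 11) else 0  # low-data-rate optimization
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + n_payload * t_sym

for sf in (7, 12):
    ms = lora_airtime_ms(51, sf)
    print(f"SF{sf}: 51-byte packet = {ms:.0f} ms on air (~{51 * 8 / ms:.2f} kbit/s)")

That works out to roughly 4 kbit/s of raw throughput at SF7 and well under 1 kbit/s at SF12, before any duty-cycle limits - video streaming needs hundreds of kbit/s at minimum.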

The real issue I see here has to do with power levels.  CCTV, audio streaming, etc. are high-data-rate activities.  You may need quite a lot of power to run these systems effectively, both for the initial data collection and then for the communications.

If you are planning to run mains power to each of these units, you may be better off running an ethernet cable as well.  Alternatively, you can run "power line" networking, which has remarkably good bandwidth and gets you back down to a single twisted pair for power and communications.

If you are planning to run off batteries and/or solar, you may need a somewhat large power system to support your application.


I would recommend going with Ubiquiti 2.4 GHz devices, which have performed relatively well in the dense foliage of the California redwood forests. It took a lot of tweaking to find paths through the dense tree cover, as mentioned in the previous posts.


See full post
discussion

How are Outdoor Fire Detection Systems Adapted for Small Forest Areas, Considering the Predominance of Indoor Fire Detectors?

How are fire detection mechanisms tailored for outdoor environments, particularly in small forest areas, given that most fire and smoke detectors are designed for indoor use?

1 0

Fire detection is a fairly broad idea.  Usually people detect the products of fire, and most often this is smoke.

Many home fire detectors in the US are ionization detectors: a small radioactive source ionizes the air in a sensing chamber, and smoke particles entering the chamber reduce the ionization current.  More smoke means a bigger drop in current.

For outdoor fire detection, PM2.5 can be a very good smoke proxy, and outdoor PM2.5 sensing is pretty accessible.

This one is very popular in my area. 
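
As a sketch of how accessible this is, here is roughly how you might read PM2.5 from a common Plantower PMS5003 sensor over serial using pyserial (the port name and the 25 µg/m³ alert threshold are assumptions to calibrate against your local baseline, and the frame checksum is not verified here):

import struct
import serial  # pyserial

def read_pm25(port="/dev/ttyUSB0"):
    with serial.Serial(port, 9600, timeout=2) as s:
        for _ in range(64):  # scan a bounded number of bytes for a frame start
            if s.read(1) == b"\x42" and s.read(1) == b"\x4d":
                frame = s.read(30)  # frame length + 13 data words + checksum
                if len(frame) == 30:
                    # Data word 5 is PM2.5 in atmospheric units (ug/m3).
                    return struct.unpack(">14H", frame[:28])[5]
    return None

pm25 = read_pm25()
if pm25 is not None:
    print(f"PM2.5: {pm25} ug/m3" + (" <- possible smoke!" if pm25 > 25 else ""))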


See full post
discussion

Open-source kinetic energy harvesting collar - Kinefox

Hello everyone, I ran across an article today (at the bottom) that talks about an open-source, kinetic energy harvesting collar ("Kinefox"). It sounds pretty neat...anyways,...

6 3

This is super cool! 

I was wondering whether development might extend to marine or aquatic animals, perhaps with a water-wheel-style harvester (even if that adds hydrodynamic drag). Thank you for sharing!

Best,

Dhanu

See full post
discussion

Recycled & DIY Remote Monitoring Buoy

Hello everybody, My name is Brett Smith, and I wanted to share an open source remote monitoring buoy we have been working on in Seychelles as part of our company named "...

2 1

Hello fellow Brett. Cool project. You mentioned a water-seal testing process. Is there documentation on that?

I don't have anything written up, but I can tell you what parts we used and how we tested.



It's pretty straightforward. We used this M10 Enclosure Vent from Blue Robotics:


Along with this nipple adapter:

Then you can use any cheap hand-held brake-bleeder pump to connect to your enclosure. You can pull a small vacuum and make sure the pressure holds.

Here's a tutorial video from Blue Robotics:

Let me know if you have any questions or if I can help out.

See full post
discussion

Automatic extraction of temperature/moon phase from camera trap video

Hey everyone, I'm currently trying to automate the annotation process for some camera trap videos by extracting metadata from the files (mp4 format). I've been tasked to try...

7 0

Hi Lucy

As others have mentioned, camera trap temperature readouts are inaccurate, and you have the additional problem that the camera's temperature can rise by 10 °C if the sun shines on it.

I would also agree with the suggestion of getting the moon phase data off the internet.

 

Do you need to do this for just one project?  And do you use the same camera make/model for every deployment?  Or at least a finite number of camera makes/models?  If the number of camera makes/models you need to worry about is finite, even if it's large, I wouldn't try to solve this for the general case; I would just hard-code the pixel ranges where the temperature/moon information appears in each camera model, so you can crop out the relevant pixels without any fancy processing.  From there it won't be trivial, exactly, but you won't need AI.

You may need separate pixel ranges for night/day images for each camera; I've seen cameras that capture video with different aspect ratios at night/day (or, more specifically, different aspect ratios for with-flash and no-flash images).  If you need to determine whether an image is grayscale/color (i.e., flash/no-flash), I have a simple heuristic function for this that works pretty well.
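
A minimal version of such a heuristic just measures how much the RGB channels differ, since IR-flash frames are typically stored as RGB with nearly identical channels (the tolerance here is an assumption to tune per camera):

import numpy as np
from PIL import Image

def is_grayscale_frame(path, tolerance=2.0):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    # Mean absolute difference between channels over the whole frame.
    rg = np.abs(img[..., 0] - img[..., 1]).mean()
    gb = np.abs(img[..., 1] - img[..., 2]).mean()
    return max(rg, gb) < tolerance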

Assuming you can manually define the relevant pixel ranges, which should just take a few minutes if it's less than a few dozen camera models, I would extract the first frame of each video to an image, then crop out the temperature/moon pixels.

Once you've cropped out the temperature/moon information, for the temperature, I would recommend using PyTesseract (an OCR library) to read the characters.  For the moon information... I would either have a small library of images for all the possible moon phases for each model, and match new images against those, or maybe - depending on the exact style they use - you could just, e.g., count the total number of white/dark pixels in that cropped moon image, and have a table that maps "percentage of white pixels" to a moon phase.  For all the cameras I've seen with a moon phase icon, this would work fine, and would be less work than a template matching approach.
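
A sketch of that crop-then-read approach might look like the following; the pixel boxes are per-camera-model values you'd fill in by inspecting sample frames, and the 200-intensity cutoff for "white" moon pixels is likewise a guess to tune:

import cv2
import pytesseract

TEMP_BOX = (20, 10, 140, 40)   # (x0, y0, x1, y1) for one camera model
MOON_BOX = (150, 10, 190, 40)

def first_frame(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

def read_temperature(frame):
    x0, y0, x1, y1 = TEMP_BOX
    crop = cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
    # OCR restricted to characters that can appear in a temperature readout.
    text = pytesseract.image_to_string(
        crop, config="--psm 7 -c tessedit_char_whitelist=0123456789-CF")
    return text.strip()

def moon_white_fraction(frame):
    x0, y0, x1, y1 = MOON_BOX
    crop = cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
    return float((crop > 200).mean())  # map this fraction to a phase table

frame = first_frame("clip.mp4")
print(read_temperature(frame), moon_white_fraction(frame))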

FYI I recently wrote a function to do datetime extraction from camera trap images (it would work for video frames too), but there I was trying to handle the general case where I couldn't hard-code a pixel range.  That task was both easier and harder than what you're doing here: harder because I was trying to make it work for future, unknown cameras, but easier because datetimes are relatively predictable strings, so you know when you find one, compared to, e.g., moon phase icons.

In fact maybe - as others have suggested - extracting the moon phase from pixels is unnecessary if you can extract datetimes (either from pixels or from metadata, if your metadata is reliable).
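
If you do go the datetime route, the moon phase is just arithmetic: days elapsed since a known new moon, modulo the ~29.53-day synodic month. A rough sketch (accurate to within a day or so, which is plenty for annotation; use an ephemeris library if you need better):

from datetime import datetime, timezone

SYNODIC_DAYS = 29.530589
REF_NEW_MOON = datetime(2000, 1, 6, 18, 14, tzinfo=timezone.utc)  # known new moon

def moon_phase_fraction(dt):
    # 0 = new moon, 0.5 = full moon, approaching 1 = new again.
    days = (dt - REF_NEW_MOON).total_seconds() / 86400
    return (days % SYNODIC_DAYS) / SYNODIC_DAYS

print(moon_phase_fraction(datetime(2024, 1, 25, tzinfo=timezone.utc)))  # ~0.5 (full)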

camtrapR has a function that does what you want. I have not used it myself, but it seems straightforward to use, and it can run across directories of images:

https://jniedballa.github.io/camtrapR/reference/OCRdataFields.html

See full post
discussion

Experience with SeeedStudio T1000 as tracker and data logger.  

Hi Everyone. Recently, I got a chance to work with the SeeedStudio T1000 tracker, and I made a tracker and data logger with it. It comes with a LoRa module to transmit the...

1 2

Ooh, very cool Salman! It's amazing how much tracking devices have come down in price over the years, and LoRa/LoRaWAN is just such a perfect fit for GPS data. Thanks heaps for sharing.

All the best,

Rob

See full post
event

AI for Eagles Challenge

In the AI for Eagles Challenge by FruitPunch AI, 50 AI enthusiasts and experts from all over the globe will be training classifiers to recognize golden and white-tailed eagles in flight and classify them into age groups...

0
See full post
article

New Add-ons for Mbaza AI

At Appsilon, we are always working to enable our users to get the most out of our solutions. With this in mind, we are happy to introduce two new add-ons to Mbaza AI. 

1
See full post
article

Sustained Effort: The Environmentalist’s Dilemma

Jacinta Plucinski
In this article, Jacinta Plucinski & Akiba of Freaklabs share advice on organising your thoughts around building short-term and long-term sustainability considerations into your conservation technology...

0
See full post