
AI for Conservation / Feed

Artificial intelligence is increasingly being used in the field to analyse information collected by wildlife conservationists, from camera trap and satellite images to audio recordings. AI can learn to identify which photos out of thousands contain rare species, or to pinpoint an animal call in hours of field recordings, hugely reducing the manual labour required to collect vital conservation data.

discussion

WILDLABS AWARDS 2024 – MothBox

We are incredibly thankful to WILDLABS and Arm for selecting the MothBox for the 2024 WILDLABS Awards. The MothBox is an automated light trap that attracts and...


It's fun having these start running in the forests!

discussion

Drop-deployed HydroMoth

Hi all, I'm looking to deploy a HydroMoth, on a drop-deployed frame, from a stationary USV, alongside a suite of marine chemical sensors, to add biodiversity collection to our...


Hi Sol! This seems like an awesome project! I have a few questions in response: Where were you thinking of deploying this payload and for how long? 

Regarding HydroMoth recorders: several concerns have come up in my work deploying them at this depth, because the HydroMoth uses a contact-type hydrophone. That means it relies on the case to transmit the sound vibrations of the marine soundscape to the microphone, unlike piezo-element-based hydrophones.

  • At 30-60 m you will likely have the case leak after an extended period, if not immediately. The O-ring will deform at this depth, especially around the hinge of the housing. The square prism shape is not ideal for the deep deployments you describe.
  • Beyond that depth, and really starting at about 50 m, a major concern is implosion: the HydroMoth's small air pocket has no pressure-release valve, and its lithium-ion batteries would be exposed to salt water. That kind of failure would probably break or disable your other instruments as well.
  • You are unlikely to get a usable signal with a reinforced enclosure. The signal depends on the material and geometry of the housing; the plastic will probably deform and degrade your frequency response and signal-to-noise ratio. If you place it against metal, it will dampen the sound considerably. We tried this, but the sensitivity was quite low, with a large amount of self-noise.
  • A side note: for biodiversity assessments, the HydroMoth is not characterised and is highly directional, so you wouldn't be able to compare sites using standard acoustic indices like ACI and SPL.

    That said, if you are deploying for a short time, a hydrophone like an Aquarian H1a, attached through a penetrator to a Blue Robotics housing containing a field recorder such as a Zoom recorder, may be optimal for half a day and relatively cheaper than some of the other options. You could also add another battery pack in parallel for a longer duration.


Hi Matthew,

Thanks for your advice, this is really helpful!

I'm planning to use it in a seagrass meadow survey for a series of ~20 drops/sites to around 30 m, recording for around 10 minutes each time, in Cornwall, UK.

At this stage I reckon we won't exceed 30 m, but based on your advice this doesn't sound like the best setup for the surveys we want to run.

We will try the Aquarian H1a attached to a Zoom H1e unit in a PVC case. This is what Aquarian recommended when I contacted them too.

Thanks for the advice. To be honest, the software component is what interested me most about the AudioMoth. Is there any other open-source software you would recommend for this?

Best wishes,

Sol

discussion

Has anyone combined flying drone surveys with AI for counting wild herds?

My vision is a drone of the kind that flies a survey pattern. Modern ones have a 61-megapixel camera and LIDAR with a few-millimetre resolution; they are used for mapping before a road...


Hi Johnathan!

Here are a few examples where UAVs and AI have been used to spot animals.

https://www.mdpi.com/2504-446X/7/3/179#:~:text=These%20vehicles%20follow%20flight%20plans,accurate%20than%20traditional%20counting%20approaches.

https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/1365-2656.13904

A Google Scholar search like this will find many more:

https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=uav+animals+wildlife+AI+computer+vision&btnG=

One thing often forgotten when considering UAVs for aerial surveys like these is that the maximum permitted height above ground is normally about 100-120 m. This really limits the area one can cover.
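To make that concrete, here's a rough back-of-the-envelope sketch of the ground footprint at that ceiling. The camera parameters (36 mm full-frame sensor, 35 mm lens) and the 30% side overlap are illustrative assumptions, not specs from any drone mentioned in this thread:

```python
# Back-of-the-envelope survey coverage at the ~120 m altitude ceiling.
# Sensor/lens values below are assumptions for illustration only.

def ground_footprint_m(altitude_m, sensor_mm, focal_mm):
    """Width of the ground strip imaged by the camera (pinhole model)."""
    return altitude_m * sensor_mm / focal_mm

def area_per_km_flown_km2(altitude_m, sensor_mm=36.0, focal_mm=35.0,
                          side_overlap=0.3):
    """Square km surveyed per km of flight line, allowing lateral overlap."""
    swath_m = ground_footprint_m(altitude_m, sensor_mm, focal_mm)
    effective_m = swath_m * (1.0 - side_overlap)
    return effective_m * 1000.0 / 1e6  # strip width x 1 km, in km^2

print(round(ground_footprint_m(120, 36.0, 35.0), 1))  # ~123.4 m swath
print(round(area_per_km_flown_km2(120), 3))           # ~0.086 km^2 per km flown
```

So even with a wide lens you cover well under a tenth of a square kilometre per kilometre flown, which is why flight time, not camera resolution, tends to be the binding constraint.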

Cheers,

Lars

discussion

WILDLABS AWARDS 2024 - BumbleBuzz: automatic recognition of bumblebee species and behaviour from their buzzing sounds 

The 'BumbleBuzz' team (@JeremyFroidevaux, @DarrylCox, @RichardComont, @TBFBumblebee, @KJPark, @yvesbas, @ilyassmoummad, @nicofarr) is very pleased to have been awarded the...


Super great to see that there will be more work on insect ecoacoustics! So prevalent in practically every soundscape, but so often overlooked. Can't wait to follow this project as it develops!

discussion

WILDLABS AWARDS 2024 - No-code custom AI for camera trap species classification

We're excited to introduce our project that will enable conservationists to easily train models (no code!) that they can use to identify species in their camera trap images. As we...


Congratulations on the grant! I am very much looking forward to seeing the results of your project!

Hi Michelle! Right now we're focused on species identification rather than counts of animals.

When you say timelapse images, do you mean a certain format, like bursts? Curious to understand more about your data format.

Happy to explain for sure. By Timelapse I mean images taken every 15 minutes, and sometimes the same seals (anywhere from 1 to 70 individuals) were in the image for many consecutive images. 

article

Completely irrational animals...

Article from Ars Technica about how difficult it is to detect and avoid kangaroos...

Thanks for the info Rob! Lots of research going on in this field. After decades of trying to warn the animals (first red reflectors, which do not work with ungulates, and now...
The idea is not new; it was tested twenty years ago. Short English summary: https://...
Hi Robin, Thanks for all that information! Yes, the 'at-grade' crossing idea (the article summary you linked to in your second message) is a really good one, I think. Going to try and...
discussion

AI for Conservation!

Hi everybody! I just graduated in artificial intelligence (master's) after a bachelor's in computer engineering. I'm absolutely fascinated by nature and wildlife and I'm trying to...


Welcome! Have you considered participating in any of the AI for Good challenges? I find they are a good way to build a nice portfolio of work. Also, contributing to existing open-source ML projects such as MegaDetector, or to upstream libraries such as PyTorch, is a good way to get hired.


We could always use more contributors in open-source projects. At open-source companies such as Red Hat, Anaconda, and Mozilla, people have often ended up getting hired largely because of their contributions to open-source projects. These contributions were both technical, such as writing code, and non-technical, such as writing documentation and translating tools into their local language.


article

The Variety Hour: 2024 Lineup

You’re invited to the WILDLABS Variety Hour, a monthly event that connects you to conservation tech's most exciting projects, research, and ideas. We can't wait to bring you a whole new season of speakers and...

event

Catch up with The Variety Hour: March 2024

Variety Hour is back! This month we're talking about making AI more accessible with Pytorch, new developments from WildMe and TagRanger, and working with geospatial data with Fauna & Flora. See you there!

Unfortunately, I can't be there. When will you upload the recording?    
discussion

BirdWeather | PUC

Hi Everyone, I just found out about this site/network! I wanted to introduce myself - I'm the CEO of a little company called Scribe Labs. We're the small team behind...


I love the live-stream pin feature!

Hi Tim, I just discovered your great little device and am about to use it for the first time this weekend. Would love to be directly in touch, since we are testing it out as an option to recommend to our clients :) Love that it includes Australian birds! Cheers, Debbie

Hi @timbirdweather I've now got them up and running and am wondering how I can provide feedback on species ID to improve the accuracy over time. It would be really powerful to have a confirmation capability in the soundscape view: confirming which of the candidate species it actually is, or that it is neither, to help develop the algorithms.

Also, is it possible to connect the PUC to a mobile hotspot to gather data from a device that isn't close to wifi? And have it detect either wifi or hotspot when in range? Thanks!

discussion

Labelled Terrestrial Acoustic Datasets

Hello all,I'm working with a team to develop an on-animal acoustic monitoring collar. To save power and memory, it will have an on board machine learning detector and classifier...


Thanks for sharing Kim.

We're using <1 mA while processing, which equates to ~9 Ah over a year of continuous running. The battery is a Tadiran TL-5920 C-size 3.6 V lithium cell providing 8.6 Ah, plus we will add a small (optional) solar panel. We also plan to implement a threshold system, in which the device sleeps until the noise level crosses a certain threshold and then wakes up.

The low-power MCU we are using is https://ambiq.com/apollo4/ which has a built-in low power listening capability.
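As a quick sanity check on those numbers, here's a back-of-the-envelope sketch using the figures from the post. The 80% derating factor (for self-discharge and temperature) is my own conservative assumption, not something stated above:

```python
# Power-budget sanity check: ~1 mA average draw vs. a Tadiran TL-5920's
# nominal 8.6 Ah. The 0.8 derating factor is an assumption for illustration.

HOURS_PER_YEAR = 365 * 24  # 8760

def charge_needed_ah(avg_current_ma, hours=HOURS_PER_YEAR):
    """Charge consumed, in amp-hours, at a given average current."""
    return avg_current_ma * hours / 1000.0

def runtime_days(capacity_ah, avg_current_ma, derating=0.8):
    """Days of runtime from one cell, derated for losses."""
    usable_ah = capacity_ah * derating
    return usable_ah / (avg_current_ma / 1000.0) / 24.0

print(round(charge_needed_ah(1.0), 2))   # 8.76 Ah needed for a year at 1 mA
print(round(runtime_days(8.6, 1.0)))     # ~287 days on the derated cell
```

This shows why the solar panel and threshold-triggered sleep matter: at a constant 1 mA the cell alone falls a little short of a full year once any derating is applied.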

<1 mA certainly sounds like a breakthrough for this kind of device. I hope you are able to report back with some real-world performance information about your project @jcturn3 . Sounds very promising. Will the device run directly off the optional solar cell, or will you include a capacitor, since you cannot recharge the lithium thionyl chloride cell? I had trouble obtaining the Tadiran TL-5920 cells in Australia (they would send me old SL-2770s though), so I took a gamble on a couple of brands of Chinese cells (EVE and FANSO), which seemed to do the same job without a hitch. Maybe in the USA you can get Israeli cells more easily than Chinese ones?

Message me if you think some feeding sounds, snoring, grooming and heart sounds of koalas would be any use for your model training.

Really interesting project, and an interesting chipset you found. With up to around 2 MB of SRAM, that's quite a lot of memory for an ultra-low-power SoC, I think.

It might also be interesting, while doing your research, to think about whether there are any other requirements people could have for such a platform, with a view towards more mass usage later. Thanks for sharing.

discussion

Successfully integrated deepfaune into video alerting system

Hi all, I've successfully integrated deepfaune into my full-featured video-alerting security system StalkedByTheState. The yellow box around the image represents the zone of...


As I understand it, deepfaune's first pass is an object detector based on MegaDetector; @schamaille could explain it exactly. In short, though, its output is standard YOLO-like in terms of properties. From this I use standard OpenCV code to snip out the individual matches and pass them to the second stage, which is a classifier.

My code needs a bit of cleaning up before I can release it, and it needs to be made more robust for some situations. Also, I'm waiting to hear whether I got anywhere with the WILDLABS Awards, as it would affect my plans going forward. That could be anything up to the end of next month, though at a wild guess I'd say next week, at the UN WWD or at the WILDLABS get-together :) Anyone else have any theories?

Also, my code is a little more complex because I abstract the interface to a network-based API.

Finally, I don't want to take the wind out of my own sails: I would like to launch my integration to coincide with the release of the Orin-based version of my StalkedByTheState software, the usage of which I'm trying to promote. Releasing earlier takes some of the oomph out of this.

But maybe we can have a video call sometime and we can have a chat about this?

In the final DeepFaune paper, it's mentioned that the team developed their own detector based on YOLOv8s, using the cropping information provided by MegaDetectorV5a.

Therefore, for the initial phase, I'm also using the YOLO interface (from Ultralytics) to load the deepfaune-yolov8s_960.pt model and run prediction. The results list contains one or more bounding boxes, each with a class ID (animal, person, vehicle) and probability values.

For each detection, I crop and resize the original image to the bounding-box area, apply the preprocessImage transformation, and use the predictOnBatch method (both from the Classifier class, which loads deepfaune-vit_large_patch14_dinov2.lvd142m.pt in the background) to obtain species-level classification scores for each individual bounding box.

This approach could prove valuable to other users seeking to integrate two-step DeepFaune detection and classification into their pipelines or APIs.
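For anyone wanting the shape of that two-step flow without the model weights, here's a minimal sketch. The real pipeline loads deepfaune-yolov8s_960.pt via Ultralytics and DeepFaune's Classifier (preprocessImage / predictOnBatch); here both models are replaced with stub callables, so only the detect-crop-classify control flow is shown:

```python
# Structural sketch of the two-step detect-then-classify flow described
# above. Both models are stubs; swap in the real YOLO detector and
# DeepFaune Classifier to make this a working pipeline.

def detect_stub(image):
    """Stand-in for the detector: returns (x1, y1, x2, y2, class_id) boxes."""
    h, w = len(image), len(image[0])
    return [(0, 0, w // 2, h // 2, "animal")]

def classify_stub(crop_pixels):
    """Stand-in for predictOnBatch: returns per-species scores."""
    return {"capreolus": 0.9, "vulpes": 0.1}

def crop(image, box):
    """Cut the bounding-box region out of a row-major pixel grid."""
    x1, y1, x2, y2, _ = box
    return [row[x1:x2] for row in image[y1:y2]]

def two_step_pipeline(image, detector=detect_stub, classifier=classify_stub):
    """For each detection, crop the box and run species classification."""
    results = []
    for box in detector(image):
        scores = classifier(crop(image, box))
        results.append((box[:4], box[4], scores))
    return results

image = [[0] * 8 for _ in range(6)]  # dummy 8x6 "image"
out = two_step_pipeline(image)
print(out[0][1])  # animal
```

The point of keeping the detector and classifier as plain callables is exactly the interchangeability discussed in this thread: either stage can be swapped for another model without touching the loop.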

Absolutely! I pretty much do the same thing; the resizing step, I think, relates to what I still have to do. Some large images caused my code to crash.

I want to take it one step further, and that's one of the reasons I want to talk to Microsoft: I'd like to encourage abstracting the object detection behind the network-API approach I developed, since that would mean any new model anyone develops would simply work out of the box, with no additional work, with my video-alerting software. To that end I need to have a chat to see if they agree on the added value; if so, they could add this wrapper around their code, and all of those models would be available to alert on and to use in simple Python scripts in other people's pipelines.
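To illustrate the kind of abstraction being described: any model behind the API returns detections in one normalised shape, so downstream alerting code never changes. The class and field names below are my own illustration, not StalkedByTheState's actual API:

```python
# Hypothetical sketch of a model-agnostic detection interface. A thin
# adapter per model converts its raw output into one common Detection
# shape; everything downstream consumes only Detection objects.

class Detection:
    def __init__(self, label, confidence, box):
        self.label = label            # e.g. "animal"
        self.confidence = confidence  # 0.0 - 1.0
        self.box = box                # (x1, y1, x2, y2) in pixels

def normalise_yolo_result(raw):
    """Adapter: convert YOLO-style result dicts into Detection objects."""
    return [
        Detection(d["name"], d["confidence"],
                  (d["xmin"], d["ymin"], d["xmax"], d["ymax"]))
        for d in raw
    ]

raw = [{"name": "animal", "confidence": 0.87,
        "xmin": 10, "ymin": 20, "xmax": 110, "ymax": 220}]
dets = normalise_yolo_result(raw)
print(dets[0].label, dets[0].box)  # animal (10, 20, 110, 220)
```

A new model then only needs its own small `normalise_*` adapter; the alerting pipeline itself stays untouched.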

Anyway. That's the plan.

discussion

Pytorch-Wildlife: A Collaborative Deep Learning Framework for Conservation (v1.0)

Welcome to Pytorch-Wildlife v1.0At the core of our mission is the desire to create a harmonious space where conservation scientists from all over the globe can unite, share, and...


Hello @hjayanto , you are precisely the kind of collaborator we are looking to work with closely to enhance the user-friendliness of Pytorch-Wildlife in our upcoming updates. Please feel free to send us feedback, either through a GitHub issue or here! We aim to make Pytorch-Wildlife more accessible to individuals with limited or no engineering experience. Currently, we have a Hugging Face demo UI (https://huggingface.co/spaces/AndresHdzC/pytorch-wildlife) to showcase the existing functionality in Pytorch-Wildlife. Please let us know if you encounter any issues while using the demo. We are also preparing a tutorial for those interested in Pytorch-Wildlife, and we will keep you updated on this!
