AI for Conservation / Feed

Artificial intelligence is increasingly being used in the field to analyse information collected by wildlife conservationists, from camera trap and satellite images to audio recordings. AI can learn to identify which photos out of thousands contain rare species, or pinpoint an animal call in hours of field recordings, hugely reducing the manual labour required to collect vital conservation data.


Has anyone combined flying drone surveys with AI for counting wild herds?

My vision is a drone, the kind that flies a survey pattern. Modern ones have a 61-megapixel camera and LIDAR with a few millimetres of resolution; they are used for mapping before a road...

22 0

It certainly is a beautiful dream. One thing to think about: you can have high-res images, but higher resolution usually means smaller pixels on the sensor, and then exposure time comes into play and you don't want any motion blur.
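The exposure-time worry is easy to put rough numbers on. A back-of-envelope sketch (illustrative figures, not from the post): blur in pixels is ground speed times shutter time divided by the ground sample distance.

```python
def motion_blur_pixels(speed_m_s: float, exposure_s: float, gsd_m: float) -> float:
    """Ground distance covered during the exposure, expressed in pixels."""
    return (speed_m_s * exposure_s) / gsd_m

# A drone at 15 m/s with a 1/1000 s shutter over 5 mm/pixel imagery
# smears each ground point across several pixels:
blur = motion_blur_pixels(15.0, 1.0 / 1000.0, 0.005)
print(f"{blur:.1f} px")  # 3.0 px
```

Even a fast shutter leaves multi-pixel blur at survey speeds and millimetre-class GSD, which is one reason testing from a normal plane first makes sense.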

The AI would also have to be trained on overhead perspectives.

Exposure time and so on can of course be tested from a normal plane. Sounds like something you could and should do at an early stage.

That would be an ideal application for invasive-species area monitoring, because you can cover so much ground. A truck with 4WD and a couple of hydrogen bottles could cover half the province.

And here is an idea that could develop in parallel. I happen to live near a bunch of greenhouses. Do you think your Raspberry Pi application could operate a drone inside a greenhouse? I mean, could it be taught to recognize a flying insect in the nicely constrained greenhouse environment? For example, there are only a few kinds of bugs in there, I bet.

The drone would also carry one of those small vacuum cleaners, like a Dyson stick. The system then needs to guide the drone to where the vacuum can grab the flying moth or other pest. If I could get a few of those flying, I could maybe pay for phase 2.

See full post

Thoughts on new MSc in Conservation Technology

Hello everyone, We are in the process of developing a new MSc in Conservation Technology at my university and would welcome your feedback. If you would be willing to give...

1 0

Hi @emmahiggins

sounds like a great plan. 

Could you tell us more about the content of the program, and perhaps the institutional context (which department(s) will offer the course, and which research programs or projects are related to it), so we'll have something to orient our thoughts?

See full post

Indigenous communities and AI for Conservation

Hello! I am looking for recommendations for people from indigenous communities who are either using AI or exploring the potential of AI to solve conservation problems...

5 2

I am also commenting for future notifications - very interested to hear some responses.

While not directly related to AI, here in Canada there's quite a conversation around data sovereignty for Indigenous communities, such as OCAP (Ownership, Control, Access, Possession), which may be able to connect you with some of the big players in this part of the world. There are also a few efforts to incorporate more Indigenous knowledge systems into statistical modelling, which may be of interest:

See full post

Mass Detection of Wildlife Snares Using Airborne Synthetic Radar

For the last year my colleagues Prof. Mike Inggs (Radar, Electrical Engineering, University of Cape Town) and...

21 4

Yes, it is really important to distinguish "noise" from real snares. 

Having rangers respond to false positives would really be detrimental to the whole project. Too many false positives, with rangers going out and not finding snares, will in the long term mean that they stop responding to distant snare alerts, assuming they might just be metal cans etc.

Classification of targets will depend on the interaction of the target with the 4 polarizations of the radar signal's radio waves, and the certainty of classification will be displayed, e.g.:

Target-Snare; Location: -31.71130, 24.56327; Classification Accuracy: 99%, Time Detected: 08:53 

Target-Bicycle; Location: -31.71130, 24.56327; Classification Accuracy: 32%, Time Detected: 08:55

Target-Chainsaw; Location: -31.71130, 24.56327; Classification Accuracy: 40%, Time Detected: 08:53 

Target-Aluminum can; Location: -31.71130, 24.56327; Classification Accuracy: 80%, Time Detected: 08:55
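For illustration, alert lines in that shape could be parsed and filtered by a confidence threshold before dispatching rangers. This is a hypothetical sketch; the field layout is inferred only from the sample alerts above.

```python
import re

# Field layout inferred from the sample alert lines; a hypothetical parser.
ALERT_RE = re.compile(
    r"Target-(?P<target>[^;]+); "
    r"Location: (?P<lat>-?\d+\.\d+), (?P<lon>-?\d+\.\d+); "
    r"Classification Accuracy: (?P<acc>\d+)%, "
    r"Time Detected: (?P<time>\d{2}:\d{2})"
)

def parse_alert(line: str) -> dict:
    """Turn one alert line into a dict with typed fields."""
    m = ALERT_RE.match(line.strip())
    if m is None:
        raise ValueError(f"unrecognised alert: {line!r}")
    return {
        "target": m["target"],
        "lat": float(m["lat"]),
        "lon": float(m["lon"]),
        "accuracy": int(m["acc"]),
        "time": m["time"],
    }

def dispatchable(alerts: list, min_accuracy: int = 75) -> list:
    """Keep only alerts confident enough to be worth sending a ranger out on."""
    return [a for a in alerts if a["accuracy"] >= min_accuracy]
```

With the four sample alerts above and a 75% threshold, only the snare (99%) and the aluminum can (80%) would be dispatched, which is exactly the false-positive trade-off being discussed.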

These detections will be sent as alerts to rangers. EarthRanger will monitor the response to them and what the rangers found and exactly where, together with an uploaded photograph of what was found. These will be fed back into the detection and classification algorithms, resulting in constant improvement of detection and classification under different circumstances.


Thank you so much for your support. I am finding it really difficult to find the funding for the initial development. We need lots of engineering time to refine our detection and trial it in ever more complex habitats. We really need money for a well-qualified electronic engineer competent in signal processing to work on this full-time, as my PhD student has to hold down a full-time job as radar lead for a satellite company.

Happy to help out with the processing of the SAR images and building a model on top of it. 

See full post

Successfully integrated deepfaune into video alerting system

Hi all, I've successfully integrated deepfaune into my full-featured video-alerting security system, StalkedByTheState. The yellow box around the image represents the zone of...

18 0

Hi Thijs, the use of that inflatable device to scare off bears suggests that the location where you are using it has significant power available.

Is this a common situation for the places in Romania that have bear trouble? Because I think your other systems were running off batteries, is that correct?

Yes, this system is designed to be installed near farms. We also have the repeller system with audio and light, which is battery- and solar-powered. This system is a "last line of defence". The blowers alone require 1000 watts :)

See full post

Harnessing large language models for coding, teaching and inclusion to empower research in ecology and evolution

Check out this paper that reviews the current state of AI in conservation.


ChatGPT for conservation

Hi, I've been wondering what this community's thoughts are on ChatGPT? I was just having a play with it and asked:"could you write me a script in python that loads photos and...

47 11

In my experience, ChatGPT-4 performs significantly better than version 3.5, especially in terms of contextual understanding. However, like any AI model, inaccuracies cannot be completely eliminated. I've also seen a video showing that Gemini appears to excel at literature reviews, though I haven't personally tested it yet. Here's the link to the video:

While GPT-3.5 is good for some activities, GPT-4 and GPT-4 Turbo are much better. Anthropic's Claude is also very good, on a par with GPT-4 for many tasks. As someone else has mentioned, the key is in the prompt you use, though ChatGPT is continually being extended to allow more contextual information to be included, for example external files that have been uploaded previously. Code execution and image generation are also possible with the paid version of ChatGPT, and the latest models include data up to the end of 2023 (I think). You can also call the OpenAI or other APIs programmatically to include these in your workflows for assisting with a variety of tasks.
Regarding end results - as always, we're responsible for whatever outputs are ultimately published/shared etc. 
For Conservation Evidence, you could try making your own GPT (a ChatGPT assistant) that can be published/shared, using your own evidence base and a prompt that should be well grounded and provide good responses (I should think). But don't use 3.5 for that, IMO.

Undoubtedly, things will quickly evolve from the "straight" standard models (ChatGPT-n, Bard, Claude, etc.) to more specialized Retrieval-Augmented Generation (RAG), where facts from authoritative sources and rules are supplied as context for the LLM to summarize in its response. You can direct ChatGPT and Bard: "Your response must be based on the reference sections provided", up to a few K of tokens. A huge amount of work is going into properly indexing reference materials in order to supply context to the models. Folks like FAO and CGIAR are indexing all their agricultural knowledge to feed the standard models with location-, crop-, and livestock-specific "knowledge" to provide farmers automated advice via mobile phones, etc. I can totally see the same for such mundane things as "how do I ... using ArcMap or QGIS?", purely based on the vast amount of documentation and tutorials. Google, ChatGPT, etc. do a really good job already; this just focuses the response on the body of knowledge known in advance to be relevant.

I would highly recommend folks do some searching on "LLM RAG"; that's what's going nuts now across the board.
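The RAG pattern described above can be sketched minimally: retrieve the most relevant reference sections, then ground the prompt in them. Here a toy word-overlap scorer stands in for real embedding search, and the documents and query are made up.

```python
import re

def overlap_score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared words (real systems use embeddings)."""
    words = lambda s: set(re.findall(r"[a-z]+", s.lower()))
    return len(words(query) & words(doc))

def build_rag_prompt(query: str, documents: list, k: int = 2) -> str:
    """Retrieve the k most relevant documents and ground the prompt in them."""
    ranked = sorted(documents, key=lambda d: overlap_score(query, d), reverse=True)
    context = "\n".join(f"- {d}" for d in ranked[:k])
    return (
        "Your response must be based on the reference sections provided.\n"
        f"References:\n{context}\n\n"
        f"Question: {query}"
    )

docs = [
    "Maize should be planted after the first heavy rains.",
    "Cassava tolerates drought once established.",
    "Radar reflectivity varies with polarization.",
]
prompt = build_rag_prompt("When should I plant maize?", docs, k=1)
# The assembled prompt is then sent to whichever LLM API you use (not shown).
```

The point is that the model only ever sees the pre-selected, authoritative context, which is what focuses its response on the relevant body of knowledge.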

Then there's stuff I like to call "un-SQL" ... unstructured query language ... that takes free-form queries and turns them into SQL queries, with supporting visualization code.
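As a toy illustration of that "free-form question in, SQL out" idea, here the LLM step is stubbed out with a hard-coded query (table, data, and question are all invented):

```python
import sqlite3

# Toy table of wildlife sightings (invented data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sightings (species TEXT, park TEXT, n INTEGER)")
conn.executemany(
    "INSERT INTO sightings VALUES (?, ?, ?)",
    [("elephant", "Kruger", 12), ("rhino", "Kruger", 3), ("elephant", "Etosha", 7)],
)

question = "How many elephants were seen in each park?"
# In an 'un-SQL' workflow, the question plus the table schema would be sent to
# an LLM, which returns SQL; the generated query is hard-coded here:
generated_sql = (
    "SELECT park, SUM(n) FROM sightings "
    "WHERE species = 'elephant' GROUP BY park ORDER BY park"
)
rows = conn.execute(generated_sql).fetchall()
print(rows)  # [('Etosha', 7), ('Kruger', 12)]
```

In practice you would validate or sandbox model-generated SQL before executing it against a real database.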



As far as writing and evaluating proposals, I saw a paper on how summarization of public review forms is being developed in several cities.

And that's just the standard LLMs; super-specialized LLMs based on Facebook's Llama are being built purely on domain-specific bodies of dialog (medical, etc.). LOTS of PhDs to be done.

I think what will be critical in all this is strong audit trails and certification mechanisms to gain trust, especially when it comes to deceptively simple terms like "best".


See full post

AI & Gamified Citizen Science

Hi everyone. I have been developing an idea for a gamified citizen science platform. It will leverage machine learning, gamified principles, GIS and collective citizen science to...

2 0

Check out FathomVerse, a new game by MBARI folks for involving citizen scientists in improving algorithms to ID deep sea critters!

See full post

Drop-deployed HydroMoth

Hi all, I'm looking to deploy a HydroMoth, on a drop-deployed frame, from a stationary USV, alongside a suite of marine chemical sensors, to add biodiversity collection to our...

4 1

Hi Matthew,

Thanks for your advice, this is really helpful!

I'm planning to use it in a seagrass meadow survey for a series of ~20 drops/sites to around 30 m, recording for around 10 minutes each time, in Cornwall, UK.

At this stage I reckon we won't exceed 30 m but, based on your advice, this doesn't sound like the best setup for the surveys we want to try.

We will try the Aquarian H1a, attached to the Zoom H1e unit, through a PVC case. This is what Aquarian recommended to me when I contacted them too.

Thanks for the advice; to be honest, the software component is what I was most interested in when it came to the AudioMoth. Is there any other open-source software you would recommend for this?

Best wishes,


Hey Sol, 

No problem at all. Depending on your configuration, the AudioMoth software would have to run on a PCB with an EFM32 chip, which is the MCU on the AudioMoth/HydroMoth, so you would have to build a PCB centred around that chip. You could mimic the functionality of the AudioMoth software on another board, like a Raspberry Pi with Python's PyAudio library, for example. The problem you would have is that the H1a requires phantom power, so it's not plug and play. I'm not too familiar with the H1e, but maybe you can control the microphone through the recorder, triggered by the RPi (not that this is the most power-efficient platform for this application, but it is user friendly). A simpler solution might be to just record continuously and play a sound, or take notes of when your 10-minute deployment starts. I think it should last you >6 hours with a set of lithium Energizer batteries. You may want to think about putting a penetrator on the PVC housing for a push button or switch to start recording when you deploy; they make a few waterproof options.
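The timed-recording idea can be sketched in Python. This is a stand-in only: the WAV frames are synthesised with the stdlib wave module, where a real Pi rig would read them from PyAudio or arecord; the file name, sample rate, and shortened duration are all made up.

```python
import math
import struct
import wave

SAMPLE_RATE = 16_000   # Hz (assumed; choose to suit the hydrophone)
DURATION_S = 2         # stand-in for the 10-minute deployment window

def record_window(path: str, duration_s: int) -> None:
    """Write one mono 16-bit WAV per deployment. On a Raspberry Pi the frames
    would come from the live audio stream; here a 440 Hz tone stands in."""
    n_frames = SAMPLE_RATE * duration_s
    frames = b"".join(
        struct.pack("<h", int(32767 * 0.3 * math.sin(2 * math.pi * 440 * i / SAMPLE_RATE)))
        for i in range(n_frames)
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)          # mono hydrophone channel
        w.setsampwidth(2)          # 16-bit samples
        w.setframerate(SAMPLE_RATE)
        w.writeframes(frames)

record_window("deployment_01.wav", DURATION_S)
```

A push button or magnetic switch on a GPIO pin could then call `record_window` once per drop, which matches the penetrator/switch idea above.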

Just something else that occurred to me: if you're dropping these systems, you'll want to ensure the system isn't wobbling in the seagrass, as that will probably be all you hear on the recordings, especially if you plan to deploy shallower. For my studies in Curacao, we aim to be 5 lbs negative, but this all depends on your current and surface action. You might also want to think about the time of day you're recording. For biodiversity in general, I'd suggest recording the site for a bit (a couple of days or a week) prior to your study to see what you should account for (e.g. tide flow/current/anthropogenic disturbance) and to determine the diel patterning of the vocalizations you are aiming to collect if subsampling at 10 minutes.



Hi Sol,

If the maximum depth is 30 m, it would be worth experimenting with the HydroMoth in this application, especially if the deployment time is short. As Matt says, the air-filled case means it is not possible to accurately calibrate the signal strength due to the directionality of the response. For some applications this doesn't matter; for others it may.

Another option for longer/deeper deployments would be an Aquarian H2D hydrophone which will plug directly into AudioMoth Dev or AudioMoth 1.2 (with the 3.5mm jack added). You can then use any appropriately sized battery pack.

If you also connect a magnetic switch, as per the GPS board, you can stop and start recording from outside the housing with the standard firmware.


See full post

AI-enabled image query system

Online citizen science platforms like iNaturalist and Macaulay Library contain a wealth of images but are hard to search using text. We are looking for ideas so we can develop the...

See full post

Introducing The Inventory!

The Inventory is your one-stop shop for conservation technology tools, organisations, and R&D projects. Start contributing to it now!

5 14
This is fantastic, congrats to the WildLabs team! Look forward to diving in.
Hi @JakeBurton, thanks for your great work on the Inventory! Would it be possible to see or filter new entries or reviews? Greetings from an Austrian forest, Robin
See full post

AI for wolf ID

We're seeking training data for AI for wolf ID - we at T4C manage 3 Wildbook platforms: Wild North, Whiskerbook and the African Carnivore Wildbook (ACW).  ...

See full post

AI volunteer work

Hello all, I have recently joined this group and, going through the current feeds and discussions, I already feel that it's the right group I've been searching for for some time. I'm a software...

1 0

Hi Phani,

An entry point might be to participate in a challenge related to conservation on:

You could also reach out to a conservation organization (e.g. WWF, or something smaller and more local) and ask them directly whether there's an opportunity for you to volunteer; perhaps even suggest an idea and maybe they'll find it useful.

I hope you find the opportunity you're looking for!

See full post

MegaDetector v5 release

Some folks here have previously worked with our MegaDetector model for categorizing camera trap images as person/animal/vehicle/empty; we are excited to announce MegaDetector...

4 9

Hi @dmorris,

might you have encountered this issue while working with MegaDetector v5?

The conflict is caused by:
pytorchwildlife depends on torch==1.10.1
pytorchwildlife depends on torch==1.10.1
pytorchwildlife depends on torch==1.10.1


If yes, what solution helped?

See full post

Pytorch-Wildlife: A Collaborative Deep Learning Framework for Conservation (v1.0)

Welcome to Pytorch-Wildlife v1.0! At the core of our mission is the desire to create a harmonious space where conservation scientists from all over the globe can unite, share, and...

11 5

Hi everyone! @zhongqimiao was kind enough to join Variety Hour last month to talk more about Pytorch-Wildlife, so the recording might be of interest to folks in this thread. Catch up here: 

Hi @zhongqimiao,

Might you have faced such an issue while using MegaDetector?

The conflict is caused by:
pytorchwildlife depends on torch==1.10.1
pytorchwildlife depends on torch==1.10.1
pytorchwildlife depends on torch==1.10.1


If yes, how did you solve it, or do you have any ideas?

torch 1.10.1 doesn't seem to exist

See full post

WILDLABS AWARDS 2024 - No-code custom AI for camera trap species classification

We're excited to introduce our project that will enable conservationists to easily train models (no code!) that they can use to identify species in their camera trap images. As we...

7 5

Happy to explain for sure. By Timelapse I mean images taken every 15 minutes, and sometimes the same seals (anywhere from 1 to 70 individuals) were in the image for many consecutive images. 

Got it. We should definitely be able to handle those images. That said, if you're just looking for counts, then I'd recommend running MegaDetector, which is an object detection model that outputs a bounding box around each animal.
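As a sketch of that counting step: MegaDetector's batch output is a JSON file listing per-image detections with a category and a confidence, so counts can be pulled out by thresholding the animal boxes. The sample data below is invented, and the threshold is an assumption to tune per project.

```python
def count_animals(results: dict, conf_threshold: float = 0.2) -> dict:
    """Count above-threshold 'animal' detections per image in MegaDetector
    batch output (category '1' is 'animal' in the standard category map)."""
    animal_id = next(
        k for k, v in results["detection_categories"].items() if v == "animal"
    )
    counts = {}
    for image in results["images"]:
        dets = image.get("detections") or []
        counts[image["file"]] = sum(
            1 for d in dets
            if d["category"] == animal_id and d["conf"] >= conf_threshold
        )
    return counts

# Invented sample in the MegaDetector batch-output shape:
sample = {
    "detection_categories": {"1": "animal", "2": "person", "3": "vehicle"},
    "images": [
        {"file": "seals_001.jpg", "detections": [
            {"category": "1", "conf": 0.92, "bbox": [0.10, 0.20, 0.05, 0.08]},
            {"category": "1", "conf": 0.11, "bbox": [0.50, 0.50, 0.04, 0.06]},
            {"category": "2", "conf": 0.88, "bbox": [0.70, 0.10, 0.10, 0.30]},
        ]},
    ],
}
print(count_animals(sample))  # {'seals_001.jpg': 1}
```

For crowded seal images the threshold matters a lot: too low and noise inflates counts, too high and overlapping animals get missed.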

Hi, this is pretty interesting to me. I plan to fly a drone over wild areas and look for invasive species incursions. Feral hogs are especially bad, but in the Everglades there is a big invasion of huge snakes, and in various areas there are big herds of wild horses that will eat themselves out of habitat, just to name a few examples. Actually, the data would probably be useful in looking for invasive weeds too; that is not my focus, but the government of Canada is thinking about it.

Does your research focus on photos, or can you analyze LIDAR? I don't really know what emitters are available to fly over an area, or which beam type would be best for each animal type. I know that some drones carry a LIDAR besides a camera, for example. Maybe a thermal camera would be best for flying at night.

See full post


The MothBox

We are incredibly thankful to WILDLABS and Arm for selecting the MothBox for the 2024 WILDLABS Awards. The MothBox is an automated light trap that attracts and...

7 5

Already an update from @hikinghack

Yeah we got it about as bare bones as possible for this level of photo resolution and duration in the field. The main costs right now are:


Pi: $80

PiJuice: $75

Battery: $85

64 MP camera: $60

which lands us at $300 already. But we might be able to eliminate that PiJuice and have fewer moving parts, and cut a quarter of our costs! Compared to something like a single Logitech Brio camera that sells for $200 and only gets us about 16 MP, we were able to make this thing as cheap as we could figure out! :)

See full post