With new technologies revolutionizing data collection, wildlife researchers can now gather data at far higher volumes than ever before. The challenge now is putting this information to use, bringing the science of big data into the conservation arena. With the help of machine learning tools, this area holds immense potential for conservation practice, with applications ranging from online trafficking alerts to species-specific early warning systems to efficient movement and biodiversity monitoring, and beyond.
However, building effective machine learning tools depends on large amounts of standardized training data, and conservationists currently lack an established system for standardization. How best to develop such a system, and how to incentivize data sharing, are questions at the forefront of this work. Multiple AI-based conservation initiatives, including Wildlife Insights and Wildbook, are pioneering applications on this front.
This group is the perfect place to ask all your AI-related questions, no matter your skill level or previous familiarity! You'll find resources, meet other members with similar questions and experts who can answer them, and engage in exciting collaborative opportunities together.
Just getting started with AI in conservation? Check out our introduction tutorial, How Do I Train My First Machine Learning Model? with Daniel Situnayake, and our Virtual Meetup on Big Data. If you're coming from the more technical side of AI/ML, Sara Beery runs an AI for Conservation Slack channel that might be of interest. Message her for an invite.
Header Image: Dr Claire Burke / @CBurkeSci
Explore the Basics: AI
Understanding the possibilities for incorporating new technology into your work can feel overwhelming. With so many tools available, so many resources to keep up with, and so many innovative projects happening around the world and in our community, it's easy to lose sight of how and why these new technologies matter, and how they can be practically applied to your projects.
Machine learning has huge potential in conservation tech, and its applications are growing every day! But that potential comes with a steep learning curve - or so it can seem to those just starting out with this powerful tool!
To help you explore the potential of AI (and prepare for some of our upcoming AI-themed events!), we've compiled simple, key resources, conversations, and videos to highlight the possibilities:
Three Resources for Beginners:
- Everything I know about Machine Learning and Camera Traps, Dan Morris | Resource library, camera traps, machine learning
- Using Computer Vision to Protect Endangered Species, Kasim Rafiq | Machine learning, data analysis, big cats
- WildID | Resource library
Three Forum Threads for Beginners:
- I made an open-source tool to help you sort camera trap images | Petar Gyurov, Camera Traps
- Batch / Automated Cloud Processing | Chris Nicolas, Acoustic Monitoring
- Looking for help with camera trapping for Jaguars: Software for species ID and database building | Carmina Gutierrez, AI for Conservation
Three Tutorials for Beginners:
- How do I get started using machine learning for my camera traps? | Sara Beery, Tech Tutors
- How do I train my first machine learning model? | Daniel Situnayake, Tech Tutors
- Big Data in Conservation | Dave Thau, Dan Morris, Sarah Davidson, Virtual Meetups
Want to know more about AI, or have your specific machine learning questions answered by experts in the WILDLABS community? Make sure you join the conversation in our AI for Conservation group!
12 September 2022
Careers
The BirdNET team is seeking to fill three roles - an ecologist, a data scientist, and an embedded systems engineer - based in Germany.
9 September 2022
At the Inria Sophia Antipolis - Méditerranée center. The project will pursue two different methodological goals: (1) explore the use of natural language bottlenecks describing visible traits or other visual...
1 September 2022
This is the first in a series by David Thau on the promise and challenges to using AI and machine learning to create a planetary environmental management system.
31 August 2022
Article
APPLY NOW! The Sovereign Nature Initiative has partnered with the Kenya Wildlife Trust to experiment with emerging technologies to support their predator conservation work. Challenges will focus on: 1. Lion...
30 August 2022
This project is part of the Conservation AI Network. We aim to help threatened species & manage invasive species using leading edge analytics and artificial intelligence algorithms. The primary purpose of the...
29 August 2022
Boost conservation tech capacity at an international NGO! Fauna & Flora International is offering a paid three-month internship to consolidate and share best practices for the application of emerging hardware and software...
26 August 2022
The Department of Wildlife, Fish, and Environmental Studies (WFE), SLU, Umeå, is looking for a postdoc with strong interests in wildlife conservation technology.
26 August 2022
Article
An update on Ceres Tags products that are being used in conservation
22 August 2022
Careers
Job opening at ARISE, an innovative program in the Netherlands to build a digital infrastructure for biodiversity data and services
19 August 2022
Are you creative, love new challenges and have experience developing software? The Wildlife Insights team is hiring! Join a diverse team of ecologists, data scientists, engineers and machine learning experts to protect...
10 August 2022
The Marine Robotics and Remote Sensing (MaRRS) Lab at Duke University seeks a highly motivated UAS pilot and geospatial analyst to support the ongoing development of new and existing research and conservation programs,...
10 August 2022
Has anyone combined flying drone surveys with AI for counting wild herds?
14 April 2024 3:40pm
21 May 2024 10:53am
There is a South African company, Avior Labs, that successfully does drone surveys on game farms using AI to count and identify wildlife. The trick is to use infra-red and optical cameras simultaneously, getting the AI to use the infra-red to narrow down where the animals are and then use only the optical camera feed of those areas to identify the species. Technically it is a lot trickier than it sounds, but they have an effective AI workflow.
As to altitude, most countries restrict drone flights to 120 m above ground level, and to cover a significant area you need a relatively fast, long-endurance drone, given that it has to fly survey grids. Using a 50 mm lens on a 24.1-megapixel DSLR camera, your field of view would be 90 metres and your resolution 1.5 cm per pixel. You would want to fly grids with overlap, so you would cover 50 m in a single transect. To survey a 10 x 10 km area you would therefore need to fly 200 transects of 10 km, i.e. 2,000 km; at 135 km/h that is nearly 15 hours of flying, allowing no time to turn around.
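The transect arithmetic above can be sanity-checked in a few lines. This is only a sketch of the numbers as stated in the post; the 6000-pixel horizontal resolution is an assumption for a 24.1 MP DSLR (roughly 6000 x 4000).

```python
# Quick check of the drone survey arithmetic in the post above.
sensor_width_px = 6000        # assumed horizontal resolution of a 24.1 MP DSLR
swath_m = 90                  # ground field of view at the legal 120 m AGL
resolution_cm = swath_m * 100 / sensor_width_px   # ground resolution per pixel

effective_swath_m = 50        # swath after allowing for overlap between transects
area_side_km = 10             # 10 x 10 km survey block
transects = area_side_km * 1000 / effective_swath_m
total_km = transects * area_side_km               # total distance flown
speed_kmh = 135
hours = total_km / speed_kmh                      # flight time, ignoring turns

print(resolution_cm, transects, total_km, round(hours, 1))
```

This reproduces the figures in the post: 1.5 cm per pixel, 200 transects, 2,000 km, and close to 15 hours of flying before any turnaround time.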
I am just pointing out the limitations that I have come across - nothing one cannot address but one has to take into account.
I have come to the conclusion that drones have not yet met their promise in conservation because we seriously underestimate the endurance needed for drones to become a daily conservation tool, doing game counts, surveying for vegetation changes, looking for snares.
I am trying to motivate investment in developing a long-endurance hydrogen fuel cell conservation drone with an endurance of 10 hours, a speed of 135 km/h, and the ability to run on green hydrogen generated on site through electrolysis of water using solar power.
21 May 2024 4:21pm
Some good points. Why not use a plane and fit it with high def camera + infrared sensor (+ possibly Lidar). You can fly higher and cover a much larger area in the same time.
One thing that isn't addressed by just using a camera is that many animals are missed because they're under cover. For many species and in many areas aerial surveys are pretty unreliable, but people have stuck with them because they're fast and produce results.
DeepDive: estimating global biodiversity patterns through time using deep learning
20 May 2024 4:51pm
These authors "develop an approach based on stochastic simulations of biodiversity and a deep learning model to infer richness at global or regional scales through time while incorporating spatial, temporal and taxonomic sampling variation."
Mass Detection of Wildlife Snares Using Airborne Synthetic Radar
7 January 2024 6:50am
18 May 2024 2:42pm
I really hope you get this funding! You are so right about increasing the cost of poaching as a deterrent (snares destroyed, investment in snare cables and logistics all for nothing....) - hopefully you are going to want to trial this system in moist tropical rainforests next - hope you decide Kerinci Seblat, Sumatra an ideal candidate site! (Yes, I am very, very enthusiastic)
20 May 2024 8:05am
Yes, it is really important to distinguish "noise" from real snares.
Having rangers respond to false positives would be detrimental to the whole project. Too many false positives, resulting in rangers going out and not finding a snare, will in the long term mean that they stop responding to distant snare alerts, assuming they might just be metal cans, etc.
Classification of targets will depend on the interaction of the target with the four polarizations of the radar signal, and the certainty of classification will be displayed, e.g.:
Target: Snare; Location: -31.71130, 24.56327; Classification Accuracy: 99%; Time Detected: 08:53
Target: Bicycle; Location: -31.71130, 24.56327; Classification Accuracy: 32%; Time Detected: 08:55
Target: Chainsaw; Location: -31.71130, 24.56327; Classification Accuracy: 40%; Time Detected: 08:53
Target: Aluminum can; Location: -31.71130, 24.56327; Classification Accuracy: 80%; Time Detected: 08:55
These detections will be sent as alerts to rangers. EarthRanger will monitor the response to them and record what the rangers found and exactly where, together with an uploaded photograph of what was found. These findings will be fed back into the detection and classification algorithms, constantly improving detection and classification under different circumstances.
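The triage step described above can be sketched in a few lines: only high-confidence detections go straight to rangers, while mid-confidence ones are queued for human review so false positives don't erode trust in the alerts. The detection fields mirror the example alerts in the post; the 0.9 and 0.5 thresholds are illustrative choices, not tested operating points.

```python
# Hypothetical triage of radar classifications before alerting rangers.
detections = [
    {"target": "Snare",        "accuracy": 0.99, "time": "08:53"},
    {"target": "Bicycle",      "accuracy": 0.32, "time": "08:55"},
    {"target": "Chainsaw",     "accuracy": 0.40, "time": "08:53"},
    {"target": "Aluminum can", "accuracy": 0.80, "time": "08:55"},
]

ALERT_THRESHOLD = 0.9   # send straight to rangers (illustrative value)
REVIEW_THRESHOLD = 0.5  # queue for human review; anything lower is discarded

alerts = [d for d in detections if d["accuracy"] >= ALERT_THRESHOLD]
review = [d for d in detections
          if REVIEW_THRESHOLD <= d["accuracy"] < ALERT_THRESHOLD]

print([d["target"] for d in alerts])   # only the 99% snare detection
print([d["target"] for d in review])   # the 80% aluminum can, held for review
```

Ranger feedback on what was actually found at each location (as logged in EarthRanger) could then be used to retune these thresholds over time.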
20 May 2024 8:11am
Thank you so much for your support. I am finding it really difficult to find funding for the initial development. We need lots of engineering time to refine our detection and trial it in ever more complex habitats. We really need money for a well-qualified electronic engineer competent in signal processing to work on this full-time, as my PhD student has to hold down a full-time job as radar lead for a satellite company.
Successfully integrated DeepFaune into video alerting system
2 December 2023 11:15am
3 May 2024 3:07pm
Yeah. I’ve seen the video. Very nice. Good luck with that ! Let us know how it goes.
4 May 2024 12:44pm
Hi Thijs, the use of that inflatable device to scare off bears suggests that the location you are using it has significant power available.
Is this a common situation for the places in Romania that have bear trouble ? Because I think your other systems were running off batteries is that correct ?
12 May 2024 2:59pm
Yes, this system is designed to be installed near farms. We also have the repeller system with audio & light, which is battery & solar powered. This system is a "last line of defence". The blowers alone require 1,000 watts :)
4th International Workshop on Camera Traps, AI, and Ecology
9 May 2024 1:00pm
Harnessing large language models for coding, teaching and inclusion to empower research in ecology and evolution
9 May 2024 12:51pm
Check out this paper that reviews the current state of AI in conservation.
Indigenous communities and AI for Conservation
8 May 2024 12:32pm
8 May 2024 5:09pm
Thank you for this advice!
If you need a speaker for Variety hour, I would be happy to talk about the work we are doing in the Conservation Evidence Group to use LLMs for finding and reviewing evidence of conservation actions.
8 May 2024 7:04pm
Oh yeah that would be awesome! Let me email you to follow up. I assume you're working with Alec Christie then? He was sharing your team's work in our chatgpt discussion:
9 May 2024 10:26am
Yes, exactly! Alec and I are working together on this.
Voices of Sustainability: Perspectives from Africa - Wholesome Sustainability Explained: What is E-PIE
7 May 2024 3:06am
ChatGPT for conservation
16 January 2023 10:04am
2 May 2024 9:39pm
In my experience, ChatGPT-4 performs significantly better than version 3.5, especially in terms of contextual understanding. However, like any AI model, inaccuracies cannot be completely eliminated. I've also seen a video showing that Gemini appears to excel at literature reviews, though I haven't personally tested it yet. Here's the link to the video: https://www.youtube.com/watch?v=sPiOP_CB54A.
4 May 2024 6:44am
While GPT-3.5 is good for some activities, GPT-4 and GPT-4 Turbo are much better. Anthropic's Claude is also very good, on a par with GPT-4 for many tasks. As someone else has mentioned, the key is in the prompt you use, though ChatGPT is continually being extended to allow more contextual information to be included, for example external files that have been uploaded previously. Code execution and image generation are also possible with the paid version of ChatGPT, and the latest models include data up to the end of 2023 (I think). You can also call the OpenAI or other APIs programmatically to include these in your workflows for assisting with a variety of tasks.
Regarding end results - as always, we're responsible for whatever outputs are ultimately published/shared etc.
For Conservation Evidence - you could try making your own GPT (ChatGPT assistant) that can be published and shared, using your own evidence base and prompt; that should be well grounded and provide good responses (I should think). But don't use 3.5 for that, IMO.
4 May 2024 8:28pm
Undoubtedly, things will quickly evolve from "straight" ChatGPT-n, Bard, Claude, etc. "standard" models to more specialized Retrieval-Augmented Generation (RAG), where facts from authoritative sources and rules are supplied as context for the LLM to summarize in its response. You can direct ChatGPT and Bard: "Your response must be based on the reference sections provided", up to a few thousand tokens. A huge amount of work is going into properly indexing reference materials in order to supply context to the models. Folks like FAO and CGIAR are indexing all their agricultural knowledge to feed the standard models with location, crop, livestock, etc. specialty "knowledge", to provide farmers automated advice via mobile phones. I can totally see the same for such mundane things as "how do I ... using ArcMap or QGIS?", purely based on the vast amount of documentation and tutorials. Google, ChatGPT, etc. do a really good job already; this just focuses the response on the body of knowledge known in advance to be relevant.
I would highly recommend folks do some searching on "LLM RAG" - that's what's going nuts now across the board.
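The RAG pattern described above boils down to two steps: retrieve the most relevant reference passages, then build a prompt that instructs the LLM to answer only from them. A minimal sketch, with assumptions flagged: the reference snippets are made up for illustration, and the retrieval is a toy word-overlap score standing in for the vector-embedding search a real system would use.

```python
import re

# Toy "authoritative sources" - hypothetical snippets for illustration only.
REFERENCES = [
    "AudioMoth is a low-cost acoustic logger used for biodiversity monitoring.",
    "MegaDetector is an object detection model for camera trap images.",
    "QGIS supports raster clipping via the 'Clip Raster by Mask Layer' tool.",
]

def tokens(text):
    """Lowercased word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, k=1):
    """Rank passages by naive word overlap (a stand-in for embedding search)."""
    q = tokens(question)
    ranked = sorted(REFERENCES, key=lambda p: len(q & tokens(p)), reverse=True)
    return ranked[:k]

def build_prompt(question):
    """Assemble the grounded prompt, as in the directive quoted above."""
    context = "\n".join(retrieve(question))
    return ("Your response must be based on the reference sections provided.\n"
            "References:\n" + context + "\n\nQuestion: " + question)

prompt = build_prompt("How do I clip a raster in QGIS?")
print(prompt)
```

The assembled prompt would then be sent to whichever LLM API you use; everything before that call is plain string handling, which is why so much of the current work is in indexing the reference material well.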
Then there's the stuff I like to call "un-SQL" - unstructured query language - which takes free-form queries and turns them into SQL queries, with supporting visualization code.
see:
"https://mlnotes.substack.com/p/no-more-text2sql-its-now-rag2sql"
"http://censusgpt.com"
etc.
As far as writing and evaluating proposals, I saw a paper on how summarization of public review forms are being developed in several cities.
see: "http://streetleveladvisors.com/?p=181562"
And that's just the standard LLMs; super-specialized LLMs based on Facebook Llama are being built purely based on domain-specific bodies of dialog - medical, etc. LOTS of Phds to be done.
I think what will be critical in all this are strong audit trails and certification mechanisms to gain trust, especially when it comes to deceptively simple terms like "best".
Chris
AI & Gamified Citizen Science
3 May 2024 7:24am
3 May 2024 5:09pm
Check out FathomVerse, a new game by MBARI folks for involving citizen scientists in improving algorithms to ID deep sea critters!
3 May 2024 8:28pm
This is so cool! I am 1000% going to see if they want to come talk about it at Variety Hour!
Travel grants for insect monitoring and AI
3 May 2024 5:20pm
Drop-deployed HydroMoth
2 April 2024 10:20am
15 April 2024 6:53am
Hi Matthew,
Thanks for your advice, this is really helpful!
I'm planning to use it in a seagrass meadow survey for a series of ~20 drops/sites to around 30 m, recording for around 10 minutes each time, in Cornwall, UK.
At this stage I reckon we won't exceed 30 m, but based on your advice, I think this sounds like not the best setup for the surveys we want to try.
We will try the Aquarian H1a, attached to the Zoom H1e unit, through a PVC case. This is what Aquarian recommended to me when I contacted them too.
Thanks for the advice; to be honest, the software component is what I was most interested in when it came to the AudioMoth - is there any other open source software you would recommend for this?
Best wishes,
Sol
21 April 2024 7:10pm
Hey Sol,
No problem at all. Depending on your configuration, the AudioMoth software would have to run on a PCB with the same chip as the AudioMoth/HydroMoth, so you would have to build a PCB centred around that chip. You could mimic the functionality of the AudioMoth software on another chip, for example on a Raspberry Pi with Python's PyAudio library. The problem you would have is that the H1a requires phantom power, so it's not plug-and-play. I'm not too familiar with the H1e, but maybe you can control the microphone through the recorder, triggered by the RPi (not that this is the most efficient MCU for this application, but it is user-friendly). A simpler solution might be to just record continuously and play a sound, or take notes of when your 10-minute deployment starts. I think it should last you >6 hours with a set of lithium Energizer batteries. You may want to think about putting a penetrator on the PVC housing for a push button or switch to start recording when you deploy - they make a few waterproof options.
Just something else that occurred to me: if you're dropping these systems, you'll want to ensure that the system isn't wobbling in the seagrass, as that will probably be all you hear on the recordings, especially if you plan to deploy shallower. For my studies in Curaçao, we aim to be 5 lbs negative, but this all depends on your current and surface action. You might also want to think about the time of day you're recording biodiversity in general. I'd suggest recording the site for a bit (a couple of days or a week) prior to your study to see what you should account for (e.g. tide flow/current/anthropogenic disturbance) and to determine the diel patterning of the vocalizations you are aiming to collect if subsampling at 10 minutes.
Cheers,
Matt
3 May 2024 12:55pm
Hi Sol,
If the maximum depth is 30 m, it would be worth experimenting with HydroMoth in this application, especially if the deployment time is short. As Matt says, the air-filled case means it is not possible to accurately calibrate the signal strength due to the directionality of the response. For some applications this doesn't matter; for others it may.
Another option for longer/deeper deployments would be an Aquarian H2D hydrophone which will plug directly into AudioMoth Dev or AudioMoth 1.2 (with the 3.5mm jack added). You can then use any appropriately sized battery pack.
If you also connect a magnetic switch, as per the GPS board, you can stop and start recording from outside the housing with the standard firmware.
Alex
AI-enabled image query system
2 May 2024 2:16am
Elephant Collective Behaviour Project - Principal Investigator
1 May 2024 1:59pm
The Inventory User Guide
1 May 2024 12:46pm
Introducing The Inventory!
1 May 2024 12:46pm
Hiring Chief Engineer at Conservation X Labs
1 May 2024 12:19pm
AI for wolf ID
29 April 2024 7:09pm
AI volunteer work
3 February 2024 12:29pm
24 April 2024 10:59am
Hi Phani,
An entry point might be to participate in a conservation-related challenge on Kaggle, DrivenData, FruitPunch, or the Max Planck Institute of Animal Behavior.
You could also reach out to a conservation organization (e.g. WWF, or something smaller and more local) and ask them directly whether there's an opportunity for you to volunteer - perhaps even suggest an idea; maybe they'll find it useful.
I hope you find an opportunity you're looking for!
MegaDetector v5 release
20 June 2022 9:06pm
23 April 2024 9:43pm
Hi @dmorris,
might you have encountered this issue while working with MegaDetector v5?
The conflict is caused by:
pytorchwildlife 1.0.2.13 depends on torch==1.10.1
pytorchwildlife 1.0.2.12 depends on torch==1.10.1
pytorchwildlife 1.0.2.11 depends on torch==1.10.1
If yes, what solution helped?
23 April 2024 10:38pm
I'm sorry, I don't use PyTorch-Wildlife; I recommend filing an issue on their repo. Good luck!
23 April 2024 10:38pm
[oops, the same reply got submitted twice and there doesn't seem to be a "delete" button]
Pytorch-Wildlife: A Collaborative Deep Learning Framework for Conservation (v1.0)
21 February 2024 10:30pm
26 February 2024 11:58pm
This is great, thank you so much @zhongqimiao ! I will check it out and looking forward for the upcoming tutorial!
17 April 2024 11:07am
Hi everyone! @zhongqimiao was kind enough to join Variety Hour last month to talk more about Pytorch-Wildlife, so the recording might be of interest to folks in this thread. Catch up here:
23 April 2024 9:48pm
Hi @zhongqimiao ,
Might you have faced such an issue while using MegaDetector?
The conflict is caused by:
pytorchwildlife 1.0.2.13 depends on torch==1.10.1
pytorchwildlife 1.0.2.12 depends on torch==1.10.1
pytorchwildlife 1.0.2.11 depends on torch==1.10.1
If yes, how did you solve it - or might you have any ideas?
torch 1.10.1 doesn't seem to exist
Navigating AI and good governance for INGOs
23 April 2024 10:27am
Program Manager: Integrating movement and camera trap data with international conservation policy
22 April 2024 10:16pm
Postdoc: Biologging & Camera Trap Data Integration
22 April 2024 10:10pm
WILDLABS AWARDS 2024 - No-code custom AI for camera trap species classification
5 April 2024 7:00pm
10 April 2024 3:55am
Happy to explain for sure. By Timelapse I mean images taken every 15 minutes, and sometimes the same seals (anywhere from 1 to 70 individuals) were in the image for many consecutive images.
17 April 2024 5:53pm
Got it. We should definitely be able to handle those images. That said, if you're just looking for counts, then I'd recommend running MegaDetector, an object detection model that outputs a bounding box around each animal.
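Getting per-image counts out of MegaDetector is then just a matter of tallying detections above a confidence threshold from its batch output JSON. A sketch under stated assumptions: the file names and detections are made-up sample data, the 0.2 threshold is illustrative rather than a tuned operating point, and the JSON layout follows MegaDetector's batch output format (where category "1" is "animal").

```python
import json

# Made-up sample in the shape of MegaDetector's batch output JSON.
md_output = json.loads("""
{
  "detection_categories": {"1": "animal", "2": "person", "3": "vehicle"},
  "images": [
    {"file": "seals_001.jpg",
     "detections": [
       {"category": "1", "conf": 0.95, "bbox": [0.10, 0.20, 0.10, 0.10]},
       {"category": "1", "conf": 0.88, "bbox": [0.40, 0.50, 0.10, 0.10]},
       {"category": "1", "conf": 0.05, "bbox": [0.70, 0.10, 0.10, 0.10]}
     ]},
    {"file": "seals_002.jpg", "detections": []}
  ]
}
""")

CONF_THRESHOLD = 0.2  # illustrative; worth tuning per project

# Count confident animal detections (category "1") per image.
counts = {
    img["file"]: sum(1 for d in img["detections"]
                     if d["category"] == "1" and d["conf"] >= CONF_THRESHOLD)
    for img in md_output["images"]
}
print(counts)  # {'seals_001.jpg': 2, 'seals_002.jpg': 0}
```

For timelapse sequences where the same seals sit through many consecutive frames, these raw counts would still need de-duplication across frames before being treated as abundance estimates.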
21 April 2024 5:19pm
Hi, this is pretty interesting to me. I plan to fly a drone over wild areas to look for invasive species incursions. Feral hogs are especially bad, and in the Everglades there is a big invasion of huge snakes. In various areas there are big herds of wild horses that will eat themselves out of habitat, just to name a few examples. The data would probably also be useful for looking for invasive weeds; that is not my focus, but the government of Canada is thinking about it.
Does your research focus on photos, or can you analyze LiDAR? I don't really know what emitters are available to fly over an area, or which beam type would be best for each animal type. I know that some drones carry a LiDAR besides a camera, for example. Maybe a thermal camera would be best for flying at night.
4th International workshop on vocal interactivity in-and-between humans, animals, and robots
19 April 2024 3:03pm
WILDLABS AWARDS 2024 – MothBox
15 April 2024 5:06am
18 April 2024 10:39am
Already an update from @hikinghack:
19 April 2024 12:00pm
Yeah, we got it about as bare-bones as possible for this level of photo resolution and duration in the field. The main costs right now are:
- Pi - $80
- PiJuice - $75
- Battery - $85
- 64 MP camera - $60
which lands us at $300 already. But we might be able to eliminate the PiJuice, have fewer moving parts, and cut a quarter of our costs! Compared to something like a single Logitech Brio camera that sells for $200 and only gets us about 16 MP, we've made this thing as cheap as we could figure out! :)
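The bill-of-materials arithmetic above can be checked directly (a trivial sketch using the prices quoted in the post):

```python
# MothBox bill of materials, prices as quoted above.
parts = {"Pi": 80, "PiJuice": 75, "Battery": 85, "64MP camera": 60}

total = sum(parts.values())                  # current build cost
without_pijuice = total - parts["PiJuice"]   # cost if the PiJuice is dropped
savings_fraction = parts["PiJuice"] / total  # share of cost the PiJuice represents

print(total, without_pijuice, savings_fraction)  # 300 225 0.25
```

So the PiJuice is exactly the "1/4 of our costs" mentioned: dropping it would bring the build down to $225.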
19 April 2024 12:54pm
Gotcha, well I look forward to seeing future iterations and following along with your progress!!
Early Warning Systems for Human-Wildlife Conflict, Zoonotic Spillover, and Other Conservation Challenges
17 April 2024 5:43pm
Faces, Flukes, Fins and Flanks: How multispecies re-ID models are transforming Wild Me's work
17 April 2024 11:10am
17 May 2024 12:22pm
Awesome! I am looking at something similar too - also awaiting funding. It's in SE Australia, also focusing on invasive species (red fox), protecting an area that has nesting & migratory shorebirds.
I was looking at infrared images, which will present a new challenge for AI/ML, as there isn't much in the way of training images to work with.
I think a lot depends on whether you are hoping to do the image recognition in real-time (during flight) or after downloading the images.
Also flying transects seems better supported for statistical modelling than camera trap images.
Very interested to hear how it goes.