With new technologies revolutionizing data collection, wildlife researchers can now gather data at far higher volumes than ever before. The challenge now is putting this information to use, bringing the science of big data into the conservation arena. With the help of machine learning tools, this area holds immense potential for conservation practice: applications range from online trafficking alerts to species-specific early warning systems to efficient movement and biodiversity monitoring, and beyond.
However, the process of building effective machine learning tools depends upon large amounts of standardized training data, and conservationists currently lack an established system for standardization. How to best develop such a system and incentivize data sharing are questions at the forefront of this work. There are currently multiple AI-based conservation initiatives, including Wildlife Insights and WildBook, that are pioneering applications on this front.
This group is the perfect place to ask all your AI-related questions, no matter your skill level or previous familiarity! You'll find resources, meet other members with similar questions and experts who can answer them, and engage in exciting collaborative opportunities together.
Just getting started with AI in conservation? Check out our introduction tutorial, How Do I Train My First Machine Learning Model? with Daniel Situnayake, and our Virtual Meetup on Big Data. If you're coming from the more technical side of AI/ML, Sara Beery runs an AI for Conservation slack channel that might be of interest. Message her for an invite.
Header Image: Dr Claire Burke / @CBurkeSci
Explore the Basics: AI
Understanding the possibilities for incorporating new technology into your work can feel overwhelming. With so many tools available, so many resources to keep up with, and so many innovative projects happening around the world and in our community, it's easy to lose sight of how and why these new technologies matter, and how they can be practically applied to your projects.
Machine learning has huge potential in conservation tech, and its applications are growing every day! But the tradeoff of that potential is a steep learning curve, or so it seems to those just starting out with this powerful tool!
To help you explore the potential of AI (and prepare for some of our upcoming AI-themed events!), we've compiled simple, key resources, conversations, and videos to highlight the possibilities:
Three Resources for Beginners:
- Everything I know about Machine Learning and Camera Traps, Dan Morris | Resource library, camera traps, machine learning
- Using Computer Vision to Protect Endangered Species, Kasim Rafiq | Machine learning, data analysis, big cats
- WildID | Resource
Three Forum Threads for Beginners:
- I made an open-source tool to help you sort camera trap images | Petar Gyurov, Camera Traps
- Batch / Automated Cloud Processing | Chris Nicolas, Acoustic Monitoring
- Looking for help with camera trapping for Jaguars: Software for species ID and database building | Carmina Gutierrez, AI for Conservation
Three Tutorials for Beginners:
- How do I get started using machine learning for my camera traps? | Sara Beery, Tech Tutors
- How do I train my first machine learning model? | Daniel Situnayake, Tech Tutors
- Big Data in Conservation | Dave Thau, Dan Morris, Sarah Davidson, Virtual Meetups
Want to know more about AI, or have your specific machine learning questions answered by experts in the WILDLABS community? Make sure you join the conversation in our AI for Conservation group!
- @pchwalek | He/him/his
I'm a PhD candidate in the Responsive Environments Group, working on electronic systems for human and wildlife monitoring.
- @capreolus | he/him
Capreolus e.U.
Wildlife biologist with capreolus.at
TerrOïko
PhD Student in statistical ecology
TerrOïko
I am an ecological data engineer at Terroïko, where I work on OCAPI, a platform for semi-automatic camtrap data annotation, biodiversity data interoperability and biodiversity indicators.
TerrOïko
R&D Engineer
Polytechnic University of Catalonia (UPC)
Asst. Prof @ MIT with research at the intersection of computer vision, biodiversity monitoring, conservation, and sustainability.
Centre national de la recherche scientifique (CNRS)
Behavioural ecologist @CNRS in France - working mostly on ungulates in Europe and Africa
- @adanger24 | She/Her
Arribada Initiative
Senior Project Manager and Field Specialist
- @MattyD797 | He/Him
I am studying biotic problems with abiotic intelligence. My research focus is in computational ecology within fishery acoustics, machine learning, remote sensing, and combining visual and audio species identification systems.
Save the Elephants is seeking a Principal Investigator for our Elephant Collective Behaviour Project. This role will spearhead research initiatives using cutting-edge video analysis tools to study elephant group...
1 May 2024
Article
Read in detail about how to use The Inventory, our new living directory of conservation technology tools, organisations, and R&D projects.
1 May 2024
Article
The Inventory is your one-stop shop for conservation technology tools, organisations, and R&D projects. Start contributing to it now!
1 May 2024
Technology to End the Sixth Mass Extinction. Salary: $132k - $160k; Location: Seattle, WA; 7+ years of experience in hardware product development and manufacturing; view post for full job description
1 May 2024
Review by Professor Iain H Woodhouse
29 April 2024
Careers
The Smithsonian National Zoo & Conservation Biology Institute is seeking a Program Manager to help coordinate multiple organizations in an effort to integrate movement data & camera trap data with global...
22 April 2024
The Smithsonian National Zoo & Conservation Biology Institute is seeking a Postdoctoral Research Fellow to help us integrate movement data & camera trap data with global conservation policy.
22 April 2024
Join the Seeed Vision Challenge, an opportunity for conservation innovators to harness the power of AI vision sensors for environmental monitoring and protection.
10 April 2024
Full-Stack Software Developer (Python/React) - Specializing in AI/ML for Wildlife Conservation
5 April 2024
Article
Article from Ars Technica about how difficult it is to detect and avoid kangaroos...
3 April 2024
18 month postdoc research position, Netherlands, EU-funded
28 March 2024
Article
You’re invited to the WILDLABS Variety Hour, a monthly event that connects you to conservation tech's most exciting projects, research, and ideas. We can't wait to bring you a whole new season of speakers and...
22 March 2024
Description | Groups | Updated
---|---|---
Hi Steph, this should be a simple project. Recently I came across a website with a sample video (I'm not sure whether it was from the WILDLABS website) where a camera is... | AI for Conservation, Camera Traps | 1 year 2 months ago
Bluesky have a commercial tree crown dataset available covering most of Great Britain (England, Wales and parts of Scotland). There is a canopy layer with approximate outlines of... | AI for Conservation, Drones | 1 year 2 months ago
Rainforest Connection's (RFCx) Guardian devices may be of interest. They are solar-powered and have connectivity options for WiFi, GSM and satellite transfer. They've previously... | Acoustics, AI for Conservation, Connectivity, Data management and processing tools, Protected Area Management Tools, Sensors | 1 year 2 months ago
My original background is in ecology and conservation, and I am now in the elected leadership of the Gathering for Open Science Hardware, which convenes researchers developing open... | AI for Conservation, Biologging, Camera Traps, Conservation Tech Training and Education, Data management and processing tools, Drones, Emerging Tech, Sensors | 1 year 3 months ago
Hi Sophie, can you please help me or get in touch about developing a system that can detect an elephant? Would like to discuss more about it. Kindly treat this as urgent! (+8) | AI for Conservation | 1 year 3 months ago
Hello all, @sarabeery et al. have just put out a pre-print on their educational insights into teaching computer vision to ecologists. I... | Acoustics, AI for Conservation, Conservation Tech Training and Education, Early Career, Emerging Tech | 1 year 3 months ago
The Conservation Technology Lab at San Diego Zoo seeks undergrads for summer projects in computer vision, machine learning, bioacoustics,... | Acoustics, AI for Conservation, Conservation Tech Training and Education | 1 year 4 months ago
I just came across this interesting paper in which seismic monitoring of animals like elephants was mentioned. This is the study referred to: Cheers, Lars | AI for Conservation, Camera Traps, Emerging Tech, Ethics of Conservation Tech, Human-Wildlife Conflict, Remote Sensing & GIS, Sensors | 1 year 4 months ago
Quick reminder that the deadline for applications is just shy of a week away. This workshop is particularly geared to teaching ecologists computer vision tools to apply to their... | Conservation Tech Training and Education, AI for Conservation | 1 year 4 months ago
Hahaha, now I see why you were asking... | AI for Conservation | 1 year 5 months ago
Hi all, one of the 8 MozFest 2023 spaces is 'Tech & Biodiversity', and the organisers seek input for an event on the intersection of... | AI for Conservation, Ethics of Conservation Tech, Open Source Solutions | 1 year 5 months ago
Out of curiosity, what are the similarities/differences between your platform and other image classification ones such as Wildlife Insights, WildID, ZambaCloud? I don't mean that... | AI for Conservation, Emerging Tech | 1 year 6 months ago
ChatGPT for conservation
16 January 2023 10:04am
2 May 2024 4:56pm
This is so interesting! I would love to chat more about this, as I've been thinking a lot about how we could (or shouldn't) be incorporating this into WILDLABS. Do you want to come on the Variety Hour and share more about what you've been doing on this front at Conservation Evidence? The next call is on 29 May!
2 May 2024 9:39pm
In my experience, ChatGPT-4 performs significantly better than version 3.5, especially in terms of contextual understanding. However, like any AI model, inaccuracies cannot be completely eliminated. I've also seen a video showing that Gemini appears to excel at literature reviews, though I haven't personally tested it yet. Here's the link to the video: https://www.youtube.com/watch?v=sPiOP_CB54A.
AI-enabled image query system
2 May 2024 2:16am
Elephant Collective Behaviour Project - Principal Investigator
1 May 2024 1:59pm
The Inventory User Guide
1 May 2024 12:46pm
Introducing The Inventory!
1 May 2024 12:46pm
1 May 2024 9:26pm
1 May 2024 10:12pm
2 May 2024 3:08pm
Hiring Chief Engineer at Conservation X Labs
1 May 2024 12:19pm
AI for wolf ID
29 April 2024 7:09pm
AI volunteer work
3 February 2024 12:29pm
24 April 2024 10:59am
Hi Phani,
An entry point might be to participate in a challenge related to conservation on:
You could also reach out to a conservation organization (e.g. WWF, or something smaller and more local) and ask them directly whether there's an opportunity for you to volunteer; you could even suggest an idea, and maybe they'll find it useful.
I hope you find the opportunity you're looking for!
Mass Detection of Wildlife Snares Using Airborne Synthetic Radar
7 January 2024 6:50am
19 April 2024 1:52pm
In my experience, the preference for trapping animals with different types of snares varies with factors such as traditional customs, geographical location, availability and accessibility of materials, terrain, ease of transporting materials, and the type of animal targeted, ranging from buffaloes to medium- or small-sized antelope. I have worked in open woodland savannah protected areas, where poachers prefer wired snares to hunt both big and small game, and in closed-canopy rainforests, where poachers prefer nylon snares to hunt medium- to small-sized antelope. It would be great if the technology could be modified to detect both types of snares.
24 April 2024 6:39am
Hi Godfrey, unfortunately the technology won't work on nylon snares; radar is limited to detecting metal. What I am learning is that in forest habitat, where poachers are catching small antelope like duikers and sunis, there is a higher proportion of thick nylon snares. In the areas where I operate, more than 90% of the snares are metal, mainly multi-stranded cable (like brake cables) or single strand like fencing wire. The poachers use metal because larger antelope like nyala, hartebeest, wildebeest and buffalo break nylon snares or can bite through them. They prefer multi-stranded wires like brake cable because they pull through the loop more reliably than single strand (fencing wire) and are therefore more effective. Multi-stranded wires are also more flexible and easier to coil up and travel with. Radio waves at around 2 GHz can penetrate vegetation and forest canopy but cannot penetrate tree trunks and thick branches, so there is a limitation there too, but it could be dealt with by making multiple passes on different flight paths over an area, so that a snare shielded from detection by a tree trunk at one angle becomes detectable at another.
24 April 2024 7:45am
Synthetic aperture radar will also detect and locate chainsaws, motorbikes, bicycles, firearms, machetes: in fact, anything metal.
I have been concentrating on trying to get funding for airborne synthetic aperture radar on the basis of snare detection for two reasons:
- Detecting and precisely locating snares will have the biggest conservation impact.
- Initially, the detection algorithms will run as post-processing in the cloud after a flight mission, so the approach is better suited to static targets that will still be at the location recorded during the mission.
Post-processing of the radar will eventually shift to real-time onboard processing and reporting via a satellite connection, but this would take quite a lot more development.
Has anyone combined flying drone surveys with AI for counting wild herds?
14 April 2024 3:40pm
14 April 2024 6:33pm
Hi Johnathan!
Here are a few examples where UAVs and AI have been used to spot animals.
https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/1365-2656.13904
A Google Scholar search like this will find many more:
One thing often forgotten when considering UAVs for aerial surveys like these is that the maximum height above ground is normally about 100-120 m. This really limits the area one can cover.
Cheers,
Lars
21 April 2024 4:56pm
That was one of the things I was wondering about, the height that it can resolve animals at. At some resolution it must be able to tell different animals apart.
My application is for invasive herds, or uncontrolled large animal herds such as wild horses or urban deer. In phase 2 we apply contraceptives to them to humanely reduce numbers.
24 April 2024 1:35am
I'm not an expert in this field, but I have been doing some self-study for a local project... Resolving animals in an image is not purely height-related; rather, it is a combination of height, focal length (the distance between the camera lens and the image sensor), and some other factors. Ground Sampling Distance (GSD) and spatial resolution are often used interchangeably (though there are slight differences). Flying low with a wide-angle lens or high with a telephoto lens can give the same GSD...
See the following for general discussions of GSD (and how to calculate it) vs Spatial Resolution:
- https://www.propelleraero.com/blog/ground-sample-distance-gsd-calculate-drone-data/ or
- https://support.pix4d.com/hc/en-us/articles/202559809-Ground-sampling-distance-GSD-in-photogrammetry or
- https://www.researchgate.net/post/Whats-the-difference-between-pixel-size-and-grid-resolution
I don't know much about the use of LIDAR for identifying animals but this seems a very interesting article to start with: https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.12219
As to what GSD/spatial resolution is needed: it depends on the animal's size. It seems that 0.5 cm GSD is best to recognize cattle-sized animals (www.sciencedirect.com/science/article/abs/pii/S0168169922000060), but elephants have been identified at 31 cm resolution in South Africa using satellite data (https://zslpublications.onlinelibrary.wiley.com/doi/full/10.1002/rse2.195).
For comparison, Google Earth images generally range from 60cm GSD and up to 20+m, depending on location (https://gis.stackexchange.com/questions/11395/spatial-resolution-of-google-earth-imagery).
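For anyone who wants to play with these numbers, the height/focal-length relationship above can be sketched with the standard GSD formula. The sensor and lens values below are illustrative assumptions, not specs from any particular drone:

```python
# Hedged sketch: approximate ground sampling distance (GSD) for a nadir
# (straight-down) camera. Formula:
#   GSD = (sensor_width * altitude) / (focal_length * image_width_in_pixels)

def gsd_cm_per_px(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Return ground sampling distance in cm per pixel."""
    # mm units cancel; multiply by 100 to convert metres to centimetres.
    return (sensor_width_mm * altitude_m * 100) / (focal_length_mm * image_width_px)

# Example with a hypothetical 13.2 mm-wide sensor, 8.8 mm lens, 5472 px image:
print(round(gsd_cm_per_px(100, 13.2, 8.8, 5472), 2))  # ~2.74 cm/px at 100 m
```

Note how GSD scales linearly with altitude: doubling the flight height doubles the cm-per-pixel figure, which is why the 100-120 m ceiling mentioned above matters so much for small animals.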
Another practical issue to deal with is that animals move, especially quickly when disturbed by low-flying drones, which can cause significant estimation issues.
Now if we can get 1cm GSD satellite images of large areas it would be REALLY helpful :-)
As an aside, my ideal scenario is to eventually replace current plane/helicopter-based animal surveys with automated options. My specific study area is about 25,000 ha / 250 km2 / 62,000 acres, and 3 helicopters (with trained personnel) take a full day to count animals, at a fairly high cost in local currency (ZAR 250k quoted). I had one estimate for an image-based survey with LiDAR, and just imaging this large area at approximately 3 cm GSD would take approximately a week of plane flying time for about the same cost...
MegaDetector v5 release
20 June 2022 9:06pm
23 April 2024 9:43pm
Hi @dmorris,
might you have encountered this issue while working with MegaDetector v5?
The conflict is caused by:
pytorchwildlife 1.0.2.13 depends on torch==1.10.1
pytorchwildlife 1.0.2.12 depends on torch==1.10.1
pytorchwildlife 1.0.2.11 depends on torch==1.10.1
If yes, what solution helped?
23 April 2024 10:38pm
I'm sorry, I don't use PyTorch-Wildlife; I recommend filing an issue on their repo. Good luck!
23 April 2024 10:38pm
[oops, the same reply got submitted twice and there doesn't seem to be a "delete" button]
Pytorch-Wildlife: A Collaborative Deep Learning Framework for Conservation (v1.0)
21 February 2024 10:30pm
26 February 2024 11:58pm
This is great, thank you so much @zhongqimiao ! I will check it out and look forward to the upcoming tutorial!
17 April 2024 11:07am
Hi everyone! @zhongqimiao was kind enough to join Variety Hour last month to talk more about Pytorch-Wildlife, so the recording might be of interest to folks in this thread. Catch up here:
23 April 2024 9:48pm
Hi @zhongqimiao ,
Might you have faced such an issue while using MegaDetector?
The conflict is caused by:
pytorchwildlife 1.0.2.13 depends on torch==1.10.1
pytorchwildlife 1.0.2.12 depends on torch==1.10.1
pytorchwildlife 1.0.2.11 depends on torch==1.10.1
If yes, how did you solve it, or might you have any ideas?
torch 1.10.1 doesn't seem to exist
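For what it's worth, the resolver message above can be read mechanically: every listed pytorchwildlife release pins torch to exactly 1.10.1, so installation fails whenever that exact version can't be resolved. A minimal sketch of the pin logic, using the `packaging` library (an assumption that it's installed, though pip itself depends on it so it usually is):

```python
# Hedged sketch: reproduce the resolver's version-pin logic. The point is
# that "torch==1.10.1" is an exact pin, so no other torch release can
# satisfy it, no matter how recent.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

pin = SpecifierSet("==1.10.1")      # the pin reported by pip above

print(Version("1.10.1") in pin)     # True: only this exact version satisfies it
print(Version("2.2.0") in pin)      # False: a modern torch does not
```

If torch 1.10.1 does exist for your Python version and platform (worth checking on PyPI, since older torch wheels are only built for older Pythons), installing it first in a fresh virtual environment with `pip install torch==1.10.1` before installing pytorchwildlife is one common way around this kind of conflict.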
Navigating AI and good governance for INGOs
23 April 2024 10:27am
Program Manager: Integrating movement and camera trap data with international conservation policy
22 April 2024 10:16pm
Postdoc: Biologging & Camera Trap Data Integration
22 April 2024 10:10pm
Drop-deployed HydroMoth
2 April 2024 10:20am
5 April 2024 2:04pm
Hi Sol! This seems like an awesome project! I have a few questions in response: Where were you thinking of deploying this payload and for how long?
Regarding HydroMoth recorders, several concerns have popped up in my work with deploying them at this depth, because it's a contact-type hydrophone: it uses the case to transmit the sound vibrations of the marine soundscape to the microphone, unlike piezo-element-based hydrophones.
- At 30-60m you will likely have the case leak after an extended period of time if not immediately. The O-ring will deform at this depth, especially around the hinge of the housing. The square prism shape is not ideal for deep deployments you describe.
- After that depth and really starting at about 50m, a major concern is synthetic implosion from the small air pocket of the hydromoth not having a pressure release valve and lithium ion batteries getting exposed to salt water. This type of reaction would cause your other instruments to probably break or fail as well.
- You are unlikely to get a good signal with a reinforced enclosure. The signal is generated via the material and geometry of the housing; the plastic will probably deform and affect your frequency response and signal-to-noise ratio. If you place it against metal, it will dampen the sound quite a lot. We tried to do this, but the sensitivity is quite low, with a large amount of self-noise.
A side note: for biodiversity assessments, the HydroMoth is not characterized and is highly directional, so you wouldn't be able to compare sites through standard acoustic indices like ACI and SPL.
That said, if you are deploying for a short time, a hydrophone like an Aquarian H1a, attached through a penetrator to a Blue Robotics housing containing a field recorder like a Zoom recorder, may be optimal for half a day and relatively cheaper than some of the other options. You could also add another battery pack in parallel for a longer duration.
15 April 2024 6:53am
Hi Matthew,
Thanks for your advice, this is really helpful!
I'm planning to use it in a seagrass meadow survey for a series of ~20 drops/sites to around 30 m, recording for around 10 minutes each time, in Cornwall, UK.
At this stage I reckon we won't exceed 30 m, but based on your advice, I think this sounds like not the best setup for the surveys we want to try.
We will try the Aquarian H1a, attached to the Zoom H1e unit, through a PVC case. This is what Aquarian recommended to me when I contacted them too.
Thanks for the advice! To be honest, the software component is what I was most interested in when it came to the AudioMoth. Is there any other open-source software you would recommend for this?
Best wishes,
Sol
21 April 2024 7:10pm
Hey Sol,
No problem at all. Depending on your configuration, the AudioMoth software would have to run on a PCB with the same chip as the AudioMoth/HydroMoth, so you would have to make a PCB centered around that chip. You could mimic the functionality of the AudioMoth software on another chip, for example on a Raspberry Pi with Python's pyaudio library. The problem you would have is that the H1a requires phantom power, so it's not plug and play. I'm not too familiar with the H1e, but maybe you can control the microphone through the recorder, triggered by the RPi (not that this is the most efficient MCU for this application, but it is user-friendly). A simpler solution might be to just record continuously and play a sound, or take notes of when your 10-minute deployment starts. I think it should last you >6 hours with a set of lithium Energizer batteries. You may want to think about putting a penetrator on the PVC housing for a push button or switch to start recording when you deploy; they make a few waterproof options.
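On the record-continuously option, it's easy to sanity-check storage needs before a deployment. A minimal sketch, assuming uncompressed mono WAV; the 48 kHz / 16-bit settings are illustrative assumptions, not HydroMoth or Zoom defaults:

```python
# Hedged sketch: storage required for continuous uncompressed WAV recording.
# bytes = hours * 3600 s/h * sample_rate * bytes_per_sample * channels

def wav_megabytes(hours, sample_rate_hz=48_000, bits_per_sample=16, channels=1):
    """Approximate WAV size in MB (ignores the small ~44-byte header)."""
    bytes_total = hours * 3600 * sample_rate_hz * (bits_per_sample // 8) * channels
    return bytes_total / 1e6

print(round(wav_megabytes(6), 1))  # ~2073.6 MB for a 6-hour mono deployment
```

So a >6-hour continuous recording day comfortably fits on a modest SD card at these settings, and subsampled 10-minute drops are a small fraction of that.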
Just something else that occurred to me: if you're dropping these systems, you'll want to ensure the system isn't wobbling in the seagrass, as that will probably be all you hear on the recordings, especially if you plan to deploy shallower. For my studies in Curacao, we aim to be 5 lbs negative, but this all depends on your current and surface action. You might also want to think about the time of day you're recording biodiversity in general. I'd suggest recording the site for a bit (a couple of days or a week) prior to your study to see what you should account for (e.g. tide flow/current/anthropogenic disturbance) and to determine diel patterning of the vocalizations you aim to collect if subsampling at 10 minutes.
Cheers,
Matt
WILDLABS AWARDS 2024 - No-code custom AI for camera trap species classification
5 April 2024 7:00pm
10 April 2024 3:55am
Happy to explain for sure. By Timelapse I mean images taken every 15 minutes, and sometimes the same seals (anywhere from 1 to 70 individuals) were in the image for many consecutive images.
17 April 2024 5:53pm
Got it. We should definitely be able to handle those images. That said, if you're just looking for counts, I'd recommend running MegaDetector, an object detection model that outputs a bounding box around each animal.
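As a rough illustration of that counting step, here's a minimal sketch for tallying animal detections per image from MegaDetector-style batch output. The JSON layout (an `images` list, category `"1"` meaning animal) follows MegaDetector's published batch format, but treat it as an assumption and verify against your own output file:

```python
# Hedged sketch: count animal detections per image from MegaDetector-style
# batch output. In the real format, "detection_categories" maps
# "1" -> animal, "2" -> person, "3" -> vehicle.

def animal_counts(md_output, conf_threshold=0.2):
    """Map each image file to its number of animal detections above threshold."""
    counts = {}
    for image in md_output["images"]:
        dets = image.get("detections") or []
        counts[image["file"]] = sum(
            1 for d in dets if d["category"] == "1" and d["conf"] >= conf_threshold
        )
    return counts

# Tiny synthetic example (not real data); load a real file with json.load():
sample = {"images": [
    {"file": "seal_001.jpg", "detections": [
        {"category": "1", "conf": 0.95, "bbox": [0.1, 0.1, 0.2, 0.2]},
        {"category": "1", "conf": 0.10, "bbox": [0.5, 0.5, 0.1, 0.1]},
    ]},
    {"file": "empty_002.jpg", "detections": []},
]}
print(animal_counts(sample))  # {'seal_001.jpg': 1, 'empty_002.jpg': 0}
```

For timelapse seal imagery like yours, per-image box counts above a confidence threshold get you abundance per frame; linking the same individuals across consecutive frames is a separate (harder) problem.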
21 April 2024 5:19pm
Hi, this is pretty interesting to me. I plan to fly a drone over wild areas and look for invasive species incursions. So feral hogs are especially bad, but in the Everglades there is a big invasion of huge snakes. In various areas there are big herds of wild horses that will eat themselves out of habitat also, just to name a few examples. Actually the data would probably be useful in looking for invasive weeds, that is not my focus but the government of Canada is thinking about it.
Does your research focus on photos, or can you analyze LIDAR? I don't really know what emitters are available to fly over an area, or which beam type would be best for each animal type. I know that some drones carry a LIDAR besides a camera for example. Maybe a thermal camera would be best to fly at night.
4th International workshop on vocal interactivity in-and-between humans, animals, and robots
19 April 2024 3:03pm
WILDLABS AWARDS 2024 – MothBox
15 April 2024 5:06am
18 April 2024 10:39am
Already an update from @hikinghack:
19 April 2024 12:00pm
Yeah, we got it about as bare-bones as possible for this level of photo resolution and duration in the field. The main costs right now are:
- Pi: $80
- PiJuice: $75
- Battery: $85
- 64 MP camera: $60
which lands us at $300 already. But we might be able to eliminate the PiJuice and have fewer moving parts, cutting 1/4 of our costs! Compared to something like a single Logitech Brio camera that sells for $200 and only gets us about 16 MP, we've made this thing as cheap as we could figure out! :)
19 April 2024 12:54pm
Gotcha, well I look forward to seeing future iterations and following along with your progress!!
Early Warning Systems for Human-Wildlife Conflict, Zoonotic Spillover, and Other Conservation Challenges
17 April 2024 5:43pm
Faces, Flukes, Fins and Flanks: How multispecies re-ID models are transforming Wild Me's work
17 April 2024 11:10am
WILDLABS AWARDS 2024 - BumbleBuzz: automatic recognition of bumblebee species and behaviour from their buzzing sounds
12 April 2024 8:37am
12 April 2024 8:41pm
Super great to see that there will be more work on insect ecoacoustics! So prevalent in practically every soundscape, but so often overlooked. Can't wait to follow this project as it develops!
17 April 2024 10:23am
Thanks Carly! I will keep anyone interested in this project posted on this platform. Cheers
Join the Seeed Vision Challenge and Explore Conservation Tech!
10 April 2024 9:15am
WILDLABS AWARDS 2024 - Enhancing Pollinator Conservation through Deep NeuralNetwork Development
7 April 2024 5:55pm
Job Opportunity - Wildlife Protection Solutions
5 April 2024 9:22pm
Completely irrational animals...
3 April 2024 7:13pm
3 April 2024 9:09pm
3 April 2024 9:20pm
4 April 2024 5:05am
Rescuers hope AI will help reunite orphaned whale with its family in B.C.
3 April 2024 4:43pm
Finwave, while currently in beta-testing, is being used to help reunite an orphaned whale with its family.
EcoAssist - Free AI models for camera traps photos identification
3 April 2024 7:16am
2 May 2024 6:48am
This is an interesting discussion and something we've been grappling with at Conservation Evidence. We have begun a project to build an AI-assisted evidence synthesis pipeline using LLMs, primarily to make evidence synthesis more efficient as a workflow (from finding and classifying relevant scientific studies testing conservation actions, to tagging key information to speed up the writing of evidence summaries). The ultimate goal is to build a living evidence database that can keep up with the rapidly growing scientific literature.
Specifically regarding ChatGPT (at least 3.5), we have found it's very poor at providing evidence-based answers to questions on conservation. The most worrying thing is that if decision-makers are using it out of the box, they may feel they're getting an authoritative answer, and this may exaggerate the issue of overconfidence. It often makes up sources, or at least suggests it got information from a study that isn't relevant. Of course this may change with future iterations, but currently I fear it's being used without proper safeguards or knowledge of its limitations, specifically for decision support. We are trying to build a more credible NL interface, fine-tuned on the CE database, with a built-in verification model that checks the evidence sources provided, and would tailor answers based on a user's location. The challenge is reducing hallucinations as much as possible, and whether ethically this is still acceptable. Ultimately, my feeling is that if we don't try to build something more credible, with better safeguards, people will end up using naive LLMs that are worse and will lead to bad decision-making.
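The "built-in verification" idea described above can be illustrated at its simplest as a citation existence check: before surfacing an answer, confirm that every study the model cites actually exists in the evidence database. Everything below, the `CE-` ID format and the toy database, is hypothetical, not Conservation Evidence's real schema:

```python
# Hedged sketch: flag cited study IDs that are missing from a known evidence
# database - the crudest possible guard against fabricated sources. Real
# verification would also check that each cited study is relevant to the claim.
import re

EVIDENCE_DB = {"CE-1001": "Install nest boxes", "CE-1002": "Create ponds"}

def unverified_citations(answer: str) -> list[str]:
    """Return cited study IDs not found in the evidence database."""
    cited = re.findall(r"CE-\d+", answer)
    return [sid for sid in cited if sid not in EVIDENCE_DB]

answer = "Nest boxes increased occupancy (CE-1001); see also CE-9999."
print(unverified_citations(answer))  # ['CE-9999']
```

An existence check like this catches outright fabrication but not the subtler failure mode mentioned above, where a real study is cited for a claim it doesn't support; that needs a relevance check against the study's content.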
Specifically regarding Chat-gpt (at least 3.5) we have found it's very poor at providing evidence-based answers to questions on conservation. The most worrying thing is that if decision-makers are using it out-of-the-box, they may feel they're getting an authoritative answer and this may exaggerate the issue of overconfidence. It often makes up sources or at least suggests it got info from a study that isn't relevant. Of course this may change with future iterations, but currently I fear it's being used without proper safeguards or knowledge of its limitations, specifically for decision support. We are trying to build a more credible NL interface, fine-tuned on the CE database that will have a built in verification model that checks the evidence sources provided, and would tailor answers based on a user's location. The challenge is reducing hallucinations as much as possible and whether ethically this is still acceptable. Ultimately, my feeling is if we don't try to build something more credible, with better safeguards, people will end up using naïve LLMs that are worse and will lead to bad decision-making.