Conservation Technology Database
18 July 2019 4:42pm
13 April 2020 5:56pm
I wasn't familiar but thank you for sharing this!
SURVEY: “Science and technology application for Wildlife Conservation”
13 April 2020 5:21am
WILDLABS Tech Hub: WWF PandaSat
13 April 2020 12:00am
Locally-Brewed Conservation Technology from a Small Town in North Bengal
10 April 2020 12:00am
Con X Tech Prize: Funding for early-stage conservation tech ideas
2 April 2020 3:51pm
A community response to help support the Australian bushfire crisis
4 January 2020 11:01pm
13 January 2020 5:02am
Hi Tom,
Okay, good news: Aaron's been able to track down the designs for you. He's going to drop them in the Slack chat, and he'll be the person you can ask questions of if needed as well.
Steph
18 February 2020 10:17am
Hi all,
I've put together a short list (below) that I've pulled from the Slack channel related to long term and short term projects that were noted or posted.
It's clear that one need is to decide whether a quarterly meeting (to review where various projects stand, rather than to start anything new) is needed, or whether anyone in the community is able to fill a voluntary coordination role to check up on the various projects. One thing at the back of my mind is to look at this from the perspective of: if new fires start at scale, what could we have in place, and how could we react? That may drive forward a few ideas and help us focus on what we can do in the short term, as there are plenty of project discussions around physical water feeders etc.
Here's what I got from the Slack channel:
Long term
- Work with Conservation Volunteers Australia to establish projects suitable for volunteers, especially things that can be done remotely, but also out in the field.
- Develop a small but skilled and experienced "brains trust" that can provide input for any environmental project / organisation which needs 'tech smarts' but doesn't have them in house.
Short term
- Australian Citizen Science Association - short term priorities are replacing lost nest boxes and getting more she-oaks in the ground. That has to start happening soon to be useful for the 2020 breeding season (the boxes, not the trees obviously).
- Continue to assess recovery program staff needs and the recovery team's priorities
- Zooniverse want to test ALA data sharing; using air quality data can involve remote volunteers without danger
- Setting up field cameras in burnt areas, to identify remaining wildlife or ferals (Zooniverse, ALA/DigiVol)
- Recording wildlife water point or feed locations and monitoring for maintenance, i.e. refills (Kobo, other app?)
- Setting up shelter tunnels in burnt areas - and recording locations, with possible addition of field cameras (Zooniverse, ALA/DigiVol).
****
2 April 2020 3:35pm
Hello everyone. I hope you're well and your friends and families are safe during the Covid-19 crisis. I know that many of us are out of sync during the shutdowns around the world, so it isn't the easiest of times to think back on the bushfires, but at the back of my mind are the risk of recurrence, the recovery still underway, and getting ahead of time with solutions. There is, however, a new opportunity that has come to light. The Australian Government has opened an application form for a $100k - $1m grant, with the description:
The Wildlife and Habitat Bushfire Recovery Program will provide funding to support the immediate survival and long-term recovery and resilience for fire-affected Australian animals, plants, ecological communities and other natural assets and their cultural values for Indigenous Australians.
This may be the opportunity we need to move forward and progress some of the ideas we all noted. We will need Oz group WildLABS orgs and contacts on the ground, but if you feel you have the capacity at this time to support a submission and be a part of it, then get in touch.
Webinar 11PST 3/20 - Deep Learning for Airborne Tree Detection
17 March 2020 1:28am
17 March 2020 3:52pm
Thanks Ben. I'll see what I can do.
24 March 2020 9:57am
Hello Ben. Unfortunately I couldn't make it on Friday. It would be great if I could take a look at your slides. I'm interested in trying to count mangrove trees. I have some WorldView 2 data. Do you think I could use DeepForest for this?
31 March 2020 4:21am
DeepForest docs are here.
https://deepforest.readthedocs.io/
You're welcome to have a look. My experience is that individual trees cannot be distinguished in satellite imagery. The coarsest resolution we've had success with is 0.3m. However, the DeepForest weights may still be useful as a starting point. If there are visible objects in your image that you want to detect, collecting a few hundred training samples and retraining the model for 2-3 epochs could be useful. See the link for details. Happy to help; submit issues on the GitHub repo if something isn't clear or doesn't work. Everything is in development.
Connecting to MBARI's Deep-Sea Instruments
31 March 2020 12:00am
WILDLABS Community Call Recording: Rainforest X-PRIZE
30 March 2020 12:00am
Protocols for IDing big batches of camera trap data
5 March 2020 10:38am
19 March 2020 10:01am
Hi Morgan and Tim,
Thank you so much for these resources, I will go through these and get back to you with any questions.
Best,
Michelle
28 March 2020 6:22pm
Hi Michelle,
I had a group of undergrads help me with a 40,000-image dataset a few years back. We used the TEAM network Wild.ID program, so each photo that was tagged indicated who tagged it. That was helpful for checking quality later on. For our common, unmistakeable species (e.g. whitetail deer), I didn't require a second identification, but for more challenging groups (foxes, mustelids), I would often have a second person review the ID, or do it myself. Later on, I had a student go through all the tagged images of a particular species (gray squirrel, etc.) and verify the first ID. I found that some of the undergrads were very reliable in their ability to ID the species, whereas some other students needed to have their work checked more meticulously. I later thought of the idea of building a training set of say, 100 photos, to have each student run through to get a sense for their familiarity with the species, but also their ability to handle the more tricky scenarios that come up often in camtrap datasets.
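The reference-set idea above can be sketched in a few lines of Python. This is a hedged illustration only - the student names, image filenames, species labels, and pass threshold are all hypothetical:

```python
# Score each tagger against a small reference set of images with known IDs,
# then flag anyone whose accuracy falls below a chosen threshold for review.
REFERENCE = {"img1": "whitetail_deer", "img2": "gray_fox", "img3": "red_fox"}

def accuracy(tags, reference):
    """Fraction of reference images a tagger identified correctly."""
    scored = [img for img in tags if img in reference]
    if not scored:
        return 0.0
    return sum(tags[img] == reference[img] for img in scored) / len(scored)

student_tags = {
    "alice": {"img1": "whitetail_deer", "img2": "gray_fox", "img3": "gray_fox"},
    "bob":   {"img1": "whitetail_deer", "img2": "gray_fox", "img3": "red_fox"},
}

# Anyone below 90% on the reference set gets their real tags double-checked.
needs_review = [name for name, tags in student_tags.items()
                if accuracy(tags, REFERENCE) < 0.9]
```

In practice the reference set would be a few hundred images weighted toward the tricky groups (foxes, mustelids) rather than three.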
Most folks could only handle 1-1.5 hours of continuous tagging. I had a few enthusiasts who would go for 2 hours straight, but that was rare. We logged effort in a shared google spreadsheet, where the students noted the dataset they worked on, any issues that came up, and any individual images that needed a second check.
I also tried to set up a more ergonomic workstation for folks (multiple monitors raised up, ergonomic mouse, etc.). Since the motion is so repetitive, it's easy for folks to develop carpal tunnel syndrome.
If you are dealing with a much larger dataset, you might want to look into more sophisticated AI/automation methods, but for a smaller project, this was doable. If you have a university connection, you can often recruit folks through chapter groups of The Wildlife Society. Students are often eager to gain experience, although many don't stick with it once they find out how unglamorous it is!
Good luck!
-Andy
29 March 2020 5:33am
Hi Tim,
Your diagram shows a USB connection between the camera and the RPi. What kind of camera is it? Also, does this mean the RPi lives with the camera in the field?
Very interesting work.
Thanks,
-harold
Bipod suggestion
28 March 2020 9:31am
Open, challenging dataset for audio classification
27 March 2020 10:52am
27 March 2020 11:51am
Hi Radek,
I'm sure others can help here, but check out our recent virtual meetup (it'll be posted here in about an hour), the speakers - particularly Dave Watson - shared open datasets that might be what you're looking for.
Over on Twitter, Jesse Alston is collating a google sheet so that people can advertise data sets that grad students can use to finish theses. @arik 's reply here might be of particular interest: 'We have been recording 24/7 soundscapes in remote US locations like Yellowstone NP and rural central Wisconsin with multiple GPS synced recorders. Our goal is to study wolf and coyote vocalisations, but if anyone can make use of these data for their own studies, drop me a line!'
Hope this helps!
Steph
27 March 2020 12:25pm
Steph, thank you so much for this, this is wonderful :) Really, really appreciate you sharing this with me :) Diving into all of the wonderful resources from you, thank you so very much!
Radek
Help collate list of Ecology/Conservation Data Sets for grad students
27 March 2020 12:07pm
Automated species detection from camera traps
30 January 2020 8:43am
25 March 2020 2:59pm
I see. I'm interested and would like to help. I will need the images to train the network - as many as possible.
If you don't have them yet, try to find similar images, preferably of the same species. I will use them to test the performance of the detection.
25 March 2020 6:49pm
I'm not familiar with camera traps, but there are a couple of options:
1) If the animals tend to cover most of the image, then you can train a CNN classifier to distinguish between species (available with the Keras-TensorFlow modules in Python)
2) If, however, the animals only cover a small part of the image (e.g. in the distance), it might be better to use an object detector (I've used YOLOv2 in the past for fish detection), which however is not that straightforward, especially with Python (I used MATLAB)
In any case, Keras-TensorFlow classification with Python might be the most straightforward option for your goal. You should also certainly have a look at Google's Wildlife Insights platform, which specializes in species classification from camera trap images.
27 March 2020 10:33am
This can be done, happy to help :) But I think I need to understand the situation a little bit more.
Do you already have the data for training / inference? Do you have any example images with the species in them annotated? Say a still from the camera with a tiger and a csv file referencing that file and annotating that there is a tiger in the image?
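For what it's worth, an annotation CSV of the shape described above might look like the sketch below; the column names, coordinates, and filenames are invented for illustration, not a required format:

```python
import csv
import io

# One row per labelled animal: which image it appears in, the species,
# and (optionally) a bounding box for detector-style training.
ANNOTATIONS = """filename,species,xmin,ymin,xmax,ymax
IMG_0001.jpg,tiger,120,80,640,410
IMG_0002.jpg,tiger,300,150,700,480
IMG_0003.jpg,wild_boar,50,200,400,460
"""

# Count labelled examples per species - a quick sanity check on class balance.
counts = {}
for row in csv.DictReader(io.StringIO(ANNOTATIONS)):
    counts[row["species"]] = counts.get(row["species"], 0) + 1
```

Even a handful of rows in this shape makes it much easier for someone to help with the modelling side.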
Would you like someone to do the developing and training of the deep learning model for you? I work as an AI research engineer at the Earth Species project and I am also a part of a community of deep learning practitioners where we apply cutting edge research to various problems. Here you can check out a little initiative I started a couple of days ago to teach people how to work with audio (there is a related forum thread but unfortunately it is in closed forums for the time being as it is associated with a course that is under way). My main point is this - if you have the data and would like someone to help you out on the modelling part, I can coordinate this.
Alternatively, if you cannot release the data, I can point you to materials that can get you started to carry out the work yourself.
Acoustic monitoring virtual meetup recording
27 March 2020 10:14am
COVID19 VIRTUAL HACKATHON, MARCH 26-30
26 March 2020 6:02pm
Mapping to Save our Planet's Biodiversity
26 March 2020 12:07pm
Webinar: Citizen Science Online
26 March 2020 12:00am
WILDLABS Tech Hub: Poreprint
26 March 2020 12:00am
Prior work on Bird Flock identification
22 March 2020 9:44am
23 March 2020 9:01pm
Hi Andrew,
Dan here—I'm one of the authors of the TinyML book! I love your Withymbe project; I've previously done work involving embedded systems and insects, and it's interesting to hear about your plans for bird flocks.
As long as you have sufficient data, you should be able to identify different bird sounds and discern them from background noise. The TinyML book has a chapter that introduces the underlying techniques, and I'd also recommend taking a look at www.edgeimpulse.com - we've built a set of tools designed to make it easy to train these types of models.
We actually recently published a tutorial on Wildlabs about this very concept:
https://www.wildlabs.net/resources/case-studies/tutorial-train-tinyml-model-can-recognize-sounds-using-only-23-kb-ram
I'm always excited to learn about new applications; feel free to reach out if there's any way we can help. I'm [email protected].
Warmly,
Dan
24 March 2020 3:49am
Just guessing, but I don't think it will make much of a difference, individual or flock. The spectrogram will look much the same, and I think that is what's used as the input vector to the CNN. If so, then I would expect the model to be quite tolerant of flock size. Just spitballing here though.
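To illustrate the point about the spectrogram being the input: a stdlib-only sketch of how one is built from overlapping frames (a real pipeline would use an FFT library; the frame size, hop, and test tone below are arbitrary choices):

```python
import math

def magnitude_spectrum(samples):
    """Naive DFT magnitude spectrum - fine for short frames, use an FFT at scale."""
    n = len(samples)
    spec = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        spec.append(math.hypot(re, im))
    return spec

def spectrogram(samples, frame=64, hop=32):
    """Stack per-frame spectra: the 2-D 'image' a CNN would take as input."""
    return [magnitude_spectrum(samples[i:i + frame])
            for i in range(0, len(samples) - frame + 1, hop)]

# A 1 kHz tone sampled at 8 kHz peaks in the same frequency bin in every
# frame; louder input (more birds at once) scales the values but not the
# shape, which is why flock size may not matter much to the model.
fs = 8000
tone = [math.sin(2 * math.pi * 1000 * t / fs) for t in range(512)]
spec = spectrogram(tone)
peak_bins = [frame.index(max(frame)) for frame in spec]
```

With frame=64 at 8 kHz, each bin is 125 Hz wide, so the 1 kHz tone lands in bin 8 in every frame.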
24 March 2020 7:23am
Hi Harold!
Great to know you are in the domain. To be honest, my analysis so far indicates that when conducting a DSP approach on the spectrum, smoothing via convolution becomes an issue. Basically, the raw spectrum is too jagged to match, so one convolves it to smooth it, but then one just gets a generic "noise"-shaped spectrum. I also have variances in sampled spectra from the same source recording. I am using fs=44100 and a spectrum of 0 - 64kHz initially, although I tried to filter from 100 - 9k with little success.
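The over-smoothing trade-off described here can be seen with a toy moving average; the synthetic "spectrum" and the window sizes below are made up purely for illustration:

```python
def smooth(spectrum, window):
    """Moving-average smoothing: wide windows flatten the peaks you match on."""
    half = window // 2
    out = []
    for i in range(len(spectrum)):
        chunk = spectrum[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A sharp spectral peak on a flat noise floor: mild smoothing keeps the
# peak visible, heavy smoothing buries it in the generic "noise" shape.
spectrum = [1.0] * 50
spectrum[25] = 10.0
mild = smooth(spectrum, 3)    # peak survives, attenuated
heavy = smooth(spectrum, 21)  # peak nearly gone
```

A median filter is one common alternative here, since it suppresses jaggedness while keeping narrow peaks better than a mean does.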
My design outline is: I need to identify the presence of a flock of a certain species of avians, I need to know when the flock is not present, and I need to distinguish the presence of other flocks of birds - not to identify them, but they are sometimes similar in size and possibly, therefore, call range. A sort of "We - Not We" approach.
I am comparing the gestalt sound, not individual calls.
Plus: I am using a Raspberry Pi for the Fog Node currently, but see from the examples (which use a Nano) that I can use my Arduino Uno for TinyML. I am interested in the power-saving, but need a robust microphone rig, which I currently get via USB.
I will check out your tutorial, many thanks!
Tally ho!
Andrew.
Virtual Field Trip: Conservation Technology with Shah Selbe
24 March 2020 12:00am
Online Workshop: Conservation Technology
23 March 2020 12:00am
Mobile App for Illegal Ivory Sales
22 March 2020 10:56pm
Illegal Ivory Sales on the Internet App
22 March 2020 10:52pm
Esri - Mapping to Save our Planet's Biodiversity
19 March 2020 10:46am
Technology Showroom of Artificial Intelligence (AI) aided Elephant Early Warning Systems
6 March 2020 6:09pm
19 March 2020 9:15am
Hi @Tim+Vedanayagam
Thank you for posting this. I'd be happy to contribute to the thermal sensing work under way. Can you confirm - have you built a thermal AI model and trained / labelled data for a particular camera?
We have been training a model for low-cost (Lepton 3.5) thermal cameras via a challenge with WWF / Wildlabs, and have 30,000 labelled images of Asian elephants as our training dataset. We're focusing on Deeplabel and YOLO, with a plan to port to TensorFlow; it will be open source, so others can use and adopt it in their thermal-based early warning systems.
More info here - https://www.zsl.org/blogs/conservation/zsl-whipsnade-zoo-becomes-a-space-for-high-tech-wild-elephant-conservation
Kind regards,
Alasdair
Machine learning fish monitoring and the seafood sector
17 March 2020 3:29pm
17 March 2020 7:58pm
Do the people approaching you have defined problem statements or use cases? That's been one of the biggest challenges in scaling high-tech fisheries monitoring from either the public or private side. Unless there's a mandate to use it (which there is in Australia and the EU), the ROI is usually too low for individuals or companies to invest in it, and the potential markets are too small. Check out this CEA/TNC report for more scoping. http://tnc.org/emreport
18 March 2020 12:48pm
Thanks Kate - that's really helpful. The company in question are investors in an emerging high-end aquaculture venture and I assume their interest is around utilising individual fish tracking to drive greater efficiency i.e. to adjust feed inputs, estimate growth rates, detect disease etc. all of which seems to be the intention of the Tidal Project. I'll get back to them with more questions and make some onward connections. If anyone else in the community has any linkages - please drop me a line on here!
WILDLABS Virtual Workshop Recording: Running Engaging Events on Zoom
18 March 2020 12:00am
Enter the Zooniverse: Try Citizen Science for Yourself!
18 March 2020 12:00am
Webinar: IIED Community-Based Approaches to Tackling Poaching and Illegal Wildlife Trade
17 March 2020 12:00am
27 August 2019 9:51am
Are you familiar with the Conservation Evidence database? https://www.conservationevidence.com/
It is a searchable database of conservation actions (including but not limited to technological solutions), categorised by effectiveness with relevant references. To me it sounds a lot like what you are looking for.