Acoustics

Acoustic monitoring is one of our biggest and most active groups, with members collecting, analysing, and interpreting acoustic data across species, ecosystems, and applications, from animal vocalizations to the sounds of our natural and built environment.

discussion

Need help from the community: Bioacoustics survey

I'm reaching out because I'm currently conducting a research project titled "UX-Driven Exploration of Unsupervised Deep-Learning in Marine Mammals Bioacoustic Conservation" for my...


Was great to chat with you Sofia and I would encourage others in the Acoustics community to help provide input for Sofia's study!

Thank you so much for your encouraging words! I'm thrilled to hear that you enjoyed our conversation, and I truly appreciate your support in spreading the word about my survey within the Acoustics community. Input from individuals like yourself is incredibly valuable to my study, and I'm eager to gather as much insight as possible. If you know of anyone else who might be interested in participating, please feel free to share the survey link with them. Once again, thank you for your support—it means a lot to me!

Best regards,
Sofia

See full post
discussion

Voice activated recording devices on satellite collars

Hi everyone, I'm collaborating on a project that will be placing satellite collars on Eurasian lynx and their prey species. I have a PhD student starting this year who is...


I am sure Simon can chime in with exact specifications. I do not have them with me now. The centre distance between the attachment holes at each end is 20 mm, which will fit the standard holes in a collar from Vectronics Aerospace.



Simon posted images of the logger attached to a collar on a spotted hyaena here: https://twitter.com/chamaillejammes/status/1441657479612542990

We are studying muskoxen, which are exposed to a polar night of 3-4 months during which the sun does not rise above the horizon. On the other hand, it also means the sun stays continuously OVER the horizon during the 3-4 summer months.

We are keeping an eye out for kinetic energy harvesting and there has been some interesting progress recently:

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0285930

@M_Stanton, you provide a nice list of uses of animal-borne audio. We could add environmental sounds, both abiotic and biotic.

We would LOVE to use audio more for ground-truthing behavioural states, and we would LOVE it if the audio recordings could be GPS time-synced...

@jared, the increased capabilities of the Iridium system you mention sound interesting. Can you link to any references for that?

Cheers, 

Lars

 

See full post
Link

ISPA: A New System for Transcribing Animal Sounds

These researchers introduce ISPA (the Inter-Species Phonetic Alphabet) as a new way to precisely interpret and transcribe animal sounds. By using text to represent sounds, existing human-language machine learning models could be used more successfully in field research.

discussion

Passionate engineer offering funding and tech solutions pro bono.

My name is Krasi Georgiev and I run an initiative focused on providing funding and tech solutions for stories with a real-world impact. The main reason is that I am passionate...


Hi Krasi! Greetings from Brazil!



That's a cool journey you've started! Congratulations. And I felt like theSearchLife resonates with the work I'm involved in around here. In a nutshell, I live at the heart of the largest remaining stretch of Atlantic Forest on the planet - one of the most biodiverse biomes in existence. The subregion where I live is named after and bathed by the "Rio Sagrado" (Sacred River), a magnificent water body with very rich cultural significance to the region (it has served as a safe zone for fleeing slaves). Well, the river and the entire bioregion are currently under threat from a truly devastating railroad project which, to say the least, is planned to cut through over 100 water springs!



In the face of that, the local community (myself included) has been mobilizing to raise awareness of the issue and hopefully stop this madness (fueled by strong international forces). One of the ways we've been fighting this is by seeking recognition of the sacred river as an entity with legal rights, which can manifest itself in court against such threats. And to illustrate what this would look like, I've been developing an AI (LLM) powered avatar for the river, which could perhaps serve as its human-relatable voice. An existing prototype of the avatar is available here. It has been fine-tuned with over 20 scientific papers on the Sacred River watershed.



And right now I and others are mobilizing to secure the conditions/resources to develop the next version of the avatar, which would include remote sensing capabilities so that the avatar is directly connected to the river and could possibly write full scientific reports on its physical properties (i.e. water quality) and the surrounding biodiversity. In fact, three other members of the WildLabs community and I have just applied to the WildLabs Grant program in order to accomplish that. Hopefully the results will be positive.



Finally, it's worth mentioning that our mobilization around providing an expression medium for the river has been multimodal, including the creation of a short film based on theatrical mobilizations we did during a festival dedicated to the river and its surrounding more-than-human communities. You can check that out here:



 

https://vimeo.com/manage/videos/850179762



 

Let's chat if any of that catches your interest!

Cheers!

Hi Danilo, you seem very passionate about this initiative, which is a good start.
It is an interesting coincidence that I am starting another project for coral reefs in the Philippines which also requires water analytics, so I can probably work on both projects at the same time.

Let's have a call and discuss; I will send you a PM with my contact details.

There is a tech glitch and I don't get email notifications from here.

See full post
discussion

Owl call detection software

I’m curious about AI software for analyzing nocturnal bird calls, particularly for detecting owl species. I currently use Nighthawk for help with processing my ARU audio files,...


I have a question about Arbimon. I'm working on a project looking at bird use of wet meadow (and associated habitat matrices). We have two bird lists we've created for BirdNet: a "Master List" of all species (to get an understanding of community data, as per input from Indigenous partners) and a "Focal Species List" as per the land managers' input. I will have volunteers doing manual verification + passive listening to attempt to catch false positives and species BirdNet has missed. I recently learned about Arbimon from the Soundscapes to Landscapes project and I'm curious about the audio detector function. Is it detecting spectrograms/sonograms from a provided classifier, or does it function similarly to BirdNet, where we can tell it which species to look for?

We are currently working on call recognition for Eurasian Pygmy, Tengmalm's and Tawny Owls. It's not a trivial task if you want to include different call types (male, female, pair, chicks), which is why we started with only 3 species. If you are interested in these 3 European species, drop me a line.

Hi Teresa, 

Thanks for your interest in Arbimon! The platform has a couple of different analysis tools that range from unsupervised (like audio event detection & clustering, or AED-C) to semi-automated (pattern matching, random forest). We've got lots more info about each in our support docs here.

The AED-C is an unsupervised machine learning model, so you aren't providing any labels (though the validation page allows you to assign events or clusters to particular species after the fact). Pattern matching is a cross-correlation template matching function where you provide one template (an example of the species-specific call you're looking for) and the algorithm looks for matches similar to that template. Random forest is a decision-tree-based machine learning model where you provide training clips (presence & absence clips for a species) which the model uses to learn how to classify that species' call. We have developed a number of CNNs (like BirdNet) but they have more of a regional focus (e.g., one for Kenya, one for western Sumatra, etc.). Right now we run these on the backend, but we are currently working on a public-facing CNN page that we hope to phase in this year.
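For anyone curious what that cross-correlation step looks like in principle, here is a rough, generic sketch in Python (not Arbimon's actual implementation); the function names, spectrogram settings, and threshold idea are all illustrative assumptions:

```python
import numpy as np
from scipy import signal

def log_spectrogram(audio, sr, nperseg=512, noverlap=256):
    """Log-magnitude spectrogram (frequency bins x time frames)."""
    _, _, sxx = signal.spectrogram(audio, fs=sr, nperseg=nperseg, noverlap=noverlap)
    return np.log10(sxx + 1e-10)

def match_template(rec_spec, tmpl_spec):
    """Slide a template (cut from a spectrogram made with the SAME settings)
    along the time axis and return a normalised correlation score per offset."""
    width = tmpl_spec.shape[1]
    tmpl = (tmpl_spec - tmpl_spec.mean()) / (tmpl_spec.std() + 1e-10)
    scores = np.empty(rec_spec.shape[1] - width + 1)
    for i in range(scores.size):
        patch = rec_spec[:, i:i + width]
        patch = (patch - patch.mean()) / (patch.std() + 1e-10)
        scores[i] = np.mean(patch * tmpl)   # Pearson-style score in [-1, 1]
    return scores  # offsets scoring above a threshold you validate count as "matches"
```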

Hope that helps, but feel free to reach out if you have more questions! You're welcome to also email me directly at [email protected] .

All the best,

Carly 

See full post
discussion

Detection and removal of windy events in wild acoustic recordings

Hello to everyone, I have to clean my dataset of recordings of an African penguin colony that inhabits the South African coast. In particular, since I have recordings with days...


Hi everyone! 

@baddiwad was one of our fantastic speakers in our June Variety Hour show, so we had the chance to hear about her work in a lot more detail. If you're interested in finding out more about Francesca's project, catch up here: 

Audacity has a noise filter which one 'trains' on a piece of recorded noise. Perhaps it is worth a shot. Freeware, open source, and with a community of developers and users.
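If you prefer a scriptable route to the same idea (profile a stretch of pure noise, then subtract it across the recording), a minimal sketch with the Python `noisereduce` package could look like this; the file names and parameters are just placeholders to adapt:

```python
# pip install noisereduce soundfile
import soundfile as sf
import noisereduce as nr

# Hypothetical file names; assumes mono WAV recordings.
audio, sr = sf.read("colony_recording.wav")
noise_clip, _ = sf.read("wind_only_segment.wav")   # a stretch containing only the unwanted noise

# Spectral-gating noise reduction "trained" on the noise clip,
# conceptually similar to Audacity's noise-profile workflow.
cleaned = nr.reduce_noise(y=audio, sr=sr, y_noise=noise_clip)

sf.write("colony_recording_cleaned.wav", cleaned, sr)
```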

Hi Francesca! 

 

Did you manage to solve this problem somehow? Can you post the workflow or the solution that worked for you?

See full post
discussion

Power Management/Recharging System and Communication System

As we know, power management/recharging and communication systems are challenges in the forest, so could anyone please suggest a device and power source to monitor sound in the forest...


Power usage for microcontrollers with solar is much more manageable. For Raspberry Pis and up, it gets expensive and big.

I'm quite impressed by the specs of the Goal Zero Yeti devices. These can have high capacity and be charged with solar. Not small, though, and the price is not in proportion to the Pis.

So this 200X model, for example, would be close to 16 days running the audio recorder; let's say 10 to be safe, without solar. Add solar? Depends on the size of the panels, I guess. Power usage for mobile networking? Depends on how much you transmit.
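To make that back-of-the-envelope calculation explicit (assuming the 200X's nominal ~187 Wh capacity and a recorder drawing roughly 0.5 W on average; substitute your own numbers):

```python
battery_wh = 187.0       # Goal Zero Yeti 200X nominal capacity (Wh)
recorder_w = 0.5         # assumed average draw of the audio recorder (W)
usable_fraction = 0.8    # assumed derating for conversion losses / cold

runtime_days = battery_wh * usable_fraction / recorder_w / 24
print(f"~{runtime_days:.0f} days")   # ~12 days derated, ~16 days with the full 187 Wh
```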

Probably some well documented experiments would be really nice for people here. Sounds like something nice for the next set of grants :)

See full post
article

New paper - An integrated passive acoustic monitoring and deep learning pipeline for black-and-white ruffed lemurs in Ranomafana National Park, Madagascar

We demonstrate the power of using passive acoustic monitoring & machine learning to survey species, using ruffed lemurs in southeastern Madagascar as an example.

What an awesome paper! Loved learning about such a promising research tool in PAM combined with CNNs, and that lemur vocalizations are termed "roar-shrieks" :)
See full post
Link

Questionnaire for Pain Points and Needs in Bioacoustics

Hi! We're engineers eager to understand how technology can simplify acoustic work. If you use recorders, your input would be invaluable. Please consider taking our 5-minute survey. As a thank you, participants will be entered into a draw for an AudioMoth recorder! Thank you so much!!

discussion

Monitoring setup in the forest based on WiFi at 2.4 GHz

I am planning to set up a network using wireless at 2.4 GHz. Can I get data on signal distortion in a forest area? Is there any special...


Hi Dilip,

I do not have data about signal distortion in a forest area with the signal you intend to use.

However, in a savannah environment, with a tower placed on the highest point of the park, the LoRa signal (around 900 MHz) is less distorted than the WiFi signal (2.4 GHz). This follows from basic physics: the frequency determines the wavelength, and the longer the wavelength (i.e. the lower the frequency), the less the signal is obstructed.

So, without second-guessing your design, I would say that in a forest configuration WiFi will need more access points deployed and may be more costly, and in your context, even when using LoRa, you will need more gateways than I have in a savannah.

To estimate the approximate number of gateways, you may need to use terrain visibility analysis.

To design the camera deployment, you will need to comply with the sampling methods defined in your research. However, if it is for surveillance purposes, you may also need to rely on terrain visibility analysis.

Best regards.

I've got quite a lot of experience with wireless in forested areas and over long(ish) ranges.

Using a wifi mesh is totally possible, and it will work.  You will likely not get great range between units.  You will likely need to have your mesh be fairly adaptable as conditions change.

Wireless and forests interact in somewhat unpredictable ways, it turns out. Generally, wireless is attenuated by water in the line of sight between stations. From the WiFi perspective, a tree is just a lot of water up in the air. Denser forest = more water = worse communications. LoRa @ 900 MHz is less prone to this issue than WiFi @ 2.4 GHz and way less prone than WiFi @ 5 GHz. But LoRa is also fairly low data rate. Streaming video via LoRa is possible with a lot of work, but video streaming is not at all what LoRa was built to do, and it does it quite poorly at best.
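If you want rough numbers for that frequency effect, one commonly cited empirical foliage model (Weissberger's modified exponential decay) gives a ballpark for the extra loss through trees; treat this as a planning sketch, not a substitute for a site survey:

```python
def weissberger_loss_db(freq_ghz, foliage_depth_m):
    """Approximate extra attenuation (dB) through foliage,
    per Weissberger's modified exponential decay model (depths up to ~400 m)."""
    if foliage_depth_m <= 14:
        return 0.45 * freq_ghz**0.284 * foliage_depth_m
    return 1.33 * freq_ghz**0.284 * foliage_depth_m**0.588

for f_ghz in (0.9, 2.4, 5.8):   # roughly LoRa, WiFi 2.4 GHz, WiFi 5 GHz
    print(f"{f_ghz} GHz through 100 m of trees: ~{weissberger_loss_db(f_ghz, 100):.0f} dB extra")
# ~19 dB at 0.9 GHz vs ~26 dB at 2.4 GHz vs ~33 dB at 5.8 GHz
```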

The real issue I see here is to do with power levels. CCTV, audio streaming, etc. are high-data-rate activities. You may need quite a lot of power to run these systems effectively, both for the initial data collection and then for the communications.

If you are planning to run mains power to each of these units, you may be better off running an ethernet cable as well.  Alternatively, you can run "power line" networking, which has remarkably good bandwidth and gets you back down to a single twisted pair for power and communications.

If you are planning to run off batteries and/or solar, you may need a somewhat large power system to support your application?

 

I would recommend going with Ubiquiti 2.4 GHz devices, which have performed relatively well in the dense foliage of the California redwood forests. It took a lot of tweaking to find paths through the dense tree cover, as mentioned in the previous posts.

 

See full post
discussion

AudioMoth Bat Call Triggering Settings

We are considering buying AudioMoths for recording bat calls for our Citibats Cambodia project[1]. I would like to learn about your experience of using AudioMoth to record bat...


Nils Bouillard (@Nilsthebatman) would be good to talk with! 

Adrià López-Baucells also has lots of useful info on his website.

See full post
careers

Program Officer - Bioacoustics, WILDLABS

Come join our team! We're looking for a Program Officer to join the WILDLABS Community, hosted by WCS in Argentina. This role will support our research program, with the chosen candidate leading our horizon scanning...

See full post
discussion

Recycled & DIY Remote Monitoring Buoy

Hello everybody, My name is Brett Smith, and I wanted to share an open source remote monitoring buoy we have been working on in Seychelles as part of our company named "...


Hello fellow Brett. Cool project. You mentioned a water-seal testing process. Is there documentation on that?

I don't have anything written up, but I can tell you what parts we used and how we tested.



It's pretty straightforward; we used this M10 Enclosure Vent from Blue Robotics:

 

Along with this nipple adapter:

Then you can use any cheap hand-held brake pump to connect to your enclosure. You can pull a small vacuum and make sure the pressure holds.
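If you want to make the "pressure holds" judgement a bit more objective, a tiny pass/fail check on the gauge readings is enough; the decay limit below is just an assumed example, not a Blue Robotics spec:

```python
def vacuum_holds(start_kpa, end_kpa, minutes, max_rise_kpa_per_min=0.1):
    """True if the pulled vacuum decays more slowly than the chosen limit.
    Gauge pressures are relative to ambient, so a pulled vacuum is negative."""
    rise_per_min = (end_kpa - start_kpa) / minutes
    return rise_per_min <= max_rise_kpa_per_min

# e.g. pulled -30 kPa, gauge reads -29.5 kPa after 15 minutes
print(vacuum_holds(-30.0, -29.5, 15))   # True: ~0.03 kPa/min of creep
```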

Here's a tutorial video from Blue Robotics:

 





Let me know if you have any questions or if I can help out.

See full post
discussion

Looking for a Supervisor/Research Group - ML-driven Marine Biomonitoring

Hi everyone, I am a final year MEng Computing student at Imperial College London interested in improving marine biodiversity monitoring with machine learning. I have...


Hi Filippo, 

Nice to read your message. Have you thought of contacting anyone in the Bioscience department at UCL? In our group, the People and Nature Lab, a few PhD students (Ben and Jason) are working on ML methods for coral reef monitoring. It might be interesting to reach out to them. List of People at CBER.

Best, Aude

 

See full post