discussion / Acoustics  / 8 August 2024

scikit-maad community

 We, @jsulloa and @Sylvain_H, are the main contributors of scikit-maad, an open source Python package dedicated to the quantitative analysis of environmental audio recordings. 

As scikit-maad is getting more and more popular in bioacoustics and ecoacoustics, we believe that its users (maaduser? We still need to find a name for scikit-maad users :) Any suggestions?) should know and help each other so we can grow together as a community. For this reason, we decided to open a discussion thread where we can start talking to each other and sharing problems, solutions, and wishes.

This discussion thread is yours and ours. Enjoy!
Hi all, 

So, as a first topic for this thread, I am going to share a question of mine that Sylvain Haupert has already addressed. Now that he has shared his answer, I can also report some preliminary results.

The topic and example I am referring to is the Detection Distance Estimation example (https://scikit-maad.github.io/_auto_examples/1_basic/plot_detection_distance.html#sphx-glr-auto-examples-1-basic-plot-detection-distance-py).
Maybe you could help me get my head around two things, or point me in the right direction.

As I understand it, to estimate the (frequency-specific) detection distance as it is done in the example, we need to know at least:
1. the coefficient of attenuation of the habitat
2. the sound pressure level of the background, L_bkg
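For readers following along, the two quantities above feed a standard propagation model: received level = source level minus geometric spreading minus frequency-dependent habitat attenuation. The following is a minimal NumPy sketch (not scikit-maad's own code; the function name, default values, and solver are my own illustration, with 0.02 dB/kHz/m being the rainforest value quoted in the example) that solves for the distance at which the signal sinks into the background:

```python
import numpy as np

def detection_distance(L0, L_bkg, f_khz, alpha=0.02, r0=1.0, r_max=10_000.0):
    """Distance (m) at which a source of level L0 (dB SPL at r0 metres)
    falls to the background level L_bkg, assuming spherical spreading
    plus a linear habitat attenuation of `alpha` dB/kHz/m.
    Solved by bisection; returns r_max if still audible there."""
    def excess(r):
        # received level minus background: positive while still audible
        return (L0 - 20 * np.log10(r / r0) - alpha * f_khz * (r - r0)) - L_bkg
    if excess(r_max) > 0:
        return r_max
    lo, hi = r0, r_max
    for _ in range(60):  # bisection: interval halves each iteration
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# e.g. a 90 dB SPL (at 1 m) call at 10 kHz over a 40 dB SPL background
print(detection_distance(90, 40, 10))  # ~68 m
```

The same solver can be run per frequency band to get the frequency-specific curve the example shows.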

For 1., the coefficient of attenuation of the habitat, you state a representative value of 0.02 dB/kHz/m for rainforest. Did you measure that empirically, or is it from the literature? Either way, could you point me to a reference, or maybe a couple of keywords for how you would determine that value for a "new" habitat?

For 2., the sound pressure level of the background, I assume you "measured" it as described under "Estimate sound pressure level from audio recordings" in the package documentation, right? And then, from the estimated sound pressure levels over time, you extracted just one best-fitting "time" for the background noise level used in the example above?
https://scikit-maad.github.io/_auto_examples/2_advanced/plot_sound_pressure_level.html#sphx-glr-auto-examples-2-advanced-plot-sound-pressure-level-py
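In case it helps frame the question, here is the calibration arithmetic I understand that documentation page to be doing, as a self-contained NumPy sketch rather than the package's own implementation. The sensitivity and gain values are hypothetical placeholders for a recorder's calibration sheet, and taking a low percentile of the Leq series as "the background" is my own assumption about how one time value might be picked:

```python
import numpy as np

def wav_to_leq(wave, fs, sensitivity_dB=-35.0, gain_dB=26.0, dt=1.0,
               p_ref=20e-6):
    """Per-window equivalent continuous level (Leq, dB SPL re 20 uPa)
    from a calibrated recording. `sensitivity_dB` (dB re 1 V/Pa) and
    `gain_dB` are placeholder values - substitute your hardware's."""
    # volts -> pascals via the overall sensitivity of the recording chain
    volts_per_pascal = 10 ** ((sensitivity_dB + gain_dB) / 20)
    pressure = wave / volts_per_pascal
    n = int(fs * dt)                      # samples per integration window
    n_win = len(pressure) // n
    p2 = pressure[: n_win * n].reshape(n_win, n) ** 2
    return 10 * np.log10(p2.mean(axis=1) / p_ref**2)

def background_level(leq, percentile=10):
    """One scalar background estimate from an Leq time series: a low
    percentile is more robust than the minimum (which may hit a dropout)."""
    return np.percentile(leq, percentile)
```

So my question, restated: is the single background value in the detection-distance example obtained from one hand-picked quiet moment, or from a summary statistic over the whole series like the above?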

As you can probably tell, I am trying to get estimates of detection distances. It is for a project in which we try to acoustically monitor Orthoptera in German grasslands. To get a robust understanding of the area/distance we can actually monitor, I am trying to estimate detection distances first per frequency, and then, through an acoustic trait database, for specific species.

Thanks in advance for your help, and for the great package.

 

I'll start the ball rolling. Last year I created a project called sbts-aru: a Raspberry Pi-based platform that maintains sub-microsecond clock accuracy and aligns sound recordings tightly to that timeline. It is constructed so that, in addition to recording, you could carry out real-time sound recognition at the same time.

Gunshot localization springs to mind. One could build an AI model to identify the start time of a gunshot, but since gunshots are just big impulses, I wondered whether there is sufficient facility in your toolbox to do this without a model. Certainly in some cases that must be possible. If so, it would not be hard to put it all together and make a real-time gunshot localization system that runs on a Pi.

What do you think? I would need to be able to identify the start of a gunshot to within about 2 ms at most, I think. Over very large distances you could likely allow quite a bit more leeway, but since the platform lets you establish (manually) the start time to less than 1 ms of error (jitter being the biggest variation here), let's aim for small numbers.
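On the "without a model" idea: for impulsive sounds, a simple short-time-energy threshold already gives sample-level onset times, which is well inside a 2 ms budget at common sampling rates. This is a generic sketch of that approach (my own illustration, not a scikit-maad function; the window length and threshold factor are assumptions to tune on real recordings):

```python
import numpy as np

def impulse_onset(wave, fs, win_ms=1.0, k=8.0):
    """First time (seconds) at which short-time energy exceeds k times
    the median window energy - a model-free pick for impulsive events
    such as gunshots. Returns None if nothing crosses the threshold."""
    n = max(1, int(fs * win_ms / 1000))   # samples per analysis window
    n_win = len(wave) // n
    energy = (wave[: n_win * n].reshape(n_win, n) ** 2).mean(axis=1)
    thresh = k * np.median(energy)        # background from the whole clip
    hits = np.flatnonzero(energy > thresh)
    if hits.size == 0:
        return None
    w = hits[0]
    # refine within the triggering window: first sample near the local peak
    seg = wave[w * n : (w + 1) * n] ** 2
    return (w * n + int(np.argmax(seg > 0.1 * seg.max()))) / fs
```

Cross-correlating the raw waveforms around such onsets between synchronized recorders would then give the time differences of arrival that localization needs, but the threshold pick alone should already tell you whether the 2 ms target is realistic on your hardware.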

Carly Batist
@carlybatist  | she/her
WildMon
Science Outreach Lead at WildMon; ecoacoustics, biodiversity monitoring, nature tech

Thank you for starting this and for creating such an awesome package!! I and many others I know use it a ton. And as a very amateur (and reluctant...haha) Python user, I appreciate the great documentation and tutorials!

Liz Ferguson
@eferguson
Ocean Science Analytics
Marine mammal ecologist and online technical trainer

I am really happy to see this conversation thread started as well, thanks Sylvain!! We recently used scikit-maad for a marine environmental comparison and made it available on this BioSound dashboard. We decided to go with scikit-maad given the breadth of acoustic indices and the useful outputs it provides. In short, we are now huge fans, and although we are still learning Python, like Carly we found the tutorials very helpful. Looking forward to using this discussion thread more often in the future! - Liz