discussion / Acoustic Monitoring / 31 March 2023

Accessible acoustic analysis tech for blind scientist - ideas?

Hi all - I'm mentoring a student who is blind on a bioacoustics project to classify killer whale calls. Does anyone out there have experience with, or ideas about, making spectrograms and/or acoustic measurements accessible to unsighted people? They are doing an awesome job classifying by listening - we just need to think about the next quantitative steps. The student is excellent at advocating for themselves and solving problems, but I'd love extra input from the hivemind.

I've got some major vision problems myself, so I can relate to their situation. I mostly want to record calls of cryptic nocturnal birds, but analysing the recordings has proved difficult.

I have looked at the Arbimon site, but the process of uploading files and creating call templates is rather complex, especially when I can't navigate computer screens as well as I used to. I recently found out about the BirdNET site from Cornell University. It uses AI to identify calls against a library of known species. I've processed some recordings and the process is ridiculously easy. Unfortunately, the IDs are mostly US-based birds. There doesn't seem to be a way to add your own known calls to identify in recordings, but they are open to collaboration to help develop the system. They even have the software available for download from GitHub, so it could be worthwhile to see if you can modify it for your purposes.

Hey Kate - I don't have any explicit ideas at the moment but will think on it - such a cool concept! I wonder if there's some form of tactile spectrogram you could develop, or a way to have them draw a representation (assign some shape) of what they are hearing as a means of classifying a call, and then cluster those shapes? I believe there's a way of creating a raised image using foam and paper, so they could have a tactile record of those shapes. Guess it depends on what you want the quantitative results to be. Super cool that you are looking for greater accessibility in acoustics!!!
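To make the tactile-spectrogram idea a bit more concrete: one hedged sketch (not anything the thread's tools do - the function name, grid size, and quantization levels are all my own assumptions) is to collapse a spectrogram down to a small grid of discrete intensity levels, which could then be embossed on swell paper or built up with foam dots.

```python
import numpy as np

def tactile_grid(signal, sr, n_fft=512, hop=256, rows=8, cols=16, levels=3):
    """Reduce a clip to a coarse rows x cols grid of discrete intensity
    levels (0..levels-1), small enough to emboss as a tactile image."""
    # Short-time magnitude spectrogram via windowed FFTs
    window = np.hanning(n_fft)
    frames = [np.abs(np.fft.rfft(signal[s:s + n_fft] * window))
              for s in range(0, len(signal) - n_fft, hop)]
    spec = np.array(frames).T  # frequency x time

    # Average blocks of the spectrogram down to the coarse grid
    f_edges = np.linspace(0, spec.shape[0], rows + 1, dtype=int)
    t_edges = np.linspace(0, spec.shape[1], cols + 1, dtype=int)
    grid = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = spec[f_edges[i]:f_edges[i + 1], t_edges[j]:t_edges[j + 1]]
            grid[i, j] = block.mean() if block.size else 0.0

    # Log-compress and quantize to a few "dot heights"
    grid = np.log1p(grid)
    edges = np.linspace(grid.min(), grid.max(), levels + 1)[1:-1]
    q = np.digitize(grid, edges)
    return q[::-1]  # flip so high frequencies sit on the top row

# Example: a pure 1 kHz tone sampled at 16 kHz for one second
sr = 16000
t = np.arange(sr) / sr
grid = tactile_grid(np.sin(2 * np.pi * 1000 * t), sr)
print(grid.shape)
```

The grid could equally be rendered as a line of braille-like dot patterns per time column; the quantization into two or three levels is just a guess at what stays distinguishable by touch.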

Hi Kate, 

We've developed a screen-reader workflow for a classification problem on our "Ocean Voices" Zooniverse project, which simply asks folks to label sounds based on what they hear and omits the spectrogram altogether. There are lots of screen-reader users who are active in the Zooniverse Talk forums, so they may have valuable input for you as well. 

Once a person has labeled data, I wonder if they could run automated detectors over the data in PAMGuard, calculate features (using something like the R package PAMpal), and then use the BrailleR package to explore the statistics in R. This article has a pretty interesting summary of statistical software for visually impaired folks - it might not be news to your student, but I thought it was pretty cool.

I'm very curious what our friends who are visually impaired might notice in the acoustic data. Best of luck to you and your student!

Hi Kate - the ARISTA Lab (Advanced Research in Inclusion & STEAM Accessibility) is actively working on this through their Eclipse Soundscapes project. I recommend reaching out to MaryKay to get the latest on their project.