discussion / Acoustics  / 7 October 2016

Moonshots - Where will we be in five years?

Hello Community!

Friendly moderator here.

Things have been kind of slow around the board so I thought I would pop in and spark a conversation with all of you!

We all know acoustic monitoring is a very promising, growing field. A lot of great work has been accomplished in recent years as the technology has steadily improved.

My question to all of you is - Where do you see the field going in the next five years? What do you hope to see your research projects accomplish in the next five years?

Let's hear your "moonshots"!

Good question. I work on the marine mammal side of things, specifically acoustic monitoring of harbour porpoises, which vocalise at around 130 kHz. I don't think I have the breadth of knowledge to answer your question for the entire acoustics field, but I can tell you about the advances in the last five years or so which have revolutionised the way I work, and what I reckon is important for the future in the marine mammal world.

Low power, high frequency data acquisition. In 2009, when I started as a researcher, it was difficult to collect data at 500 kHz+ sampling rates on multiple hydrophones. You could buy expensive specialised data acquisition cards from National Instruments etc., but then you needed specialised high frequency amplifiers, and the whole rig would take up a lot of space and easily drain a car battery in a day or so. Now, we can record at these high sample rates from multiple hydrophones on commercially available autonomous devices not much bigger than a coke can, e.g. http://www.oceaninstruments.co.nz/. That is a step change in technology which has, in my opinion, big implications for the field.

Software. This is perhaps a bit of a self-plug (I sometimes work as a developer on PAMGuard), but software, especially for high frequency sound analysis, has come a long way. It used to be the case that you couldn't even get Windows Media Player to play a high frequency sound file. Programs like MATLAB were slow and buggy, and PAMGuard was still primarily a real-time system for use on seismic survey ships and towed array surveys. Now, MATLAB, R and many other programming languages have much improved functionality and reliability, PAMGuard has a whole suite of new modules for researchers, and generally better software and hardware have made life a lot easier, even compared to just five years ago.

In terms of the future...

Pattern recognition. I reckon the next big challenge is getting past the problem of pattern recognition in acoustic data. I still haven't seen an algorithm which can pick out messy dolphin whistles or other complex tonal sounds from a spectrogram anywhere near as well as a human can. The same goes for click trains in noisy environments, classification to species level, etc. As the technology gets better we will be collecting more and more data, so highly automated methods of analysis are going to become more and more important. Loads of good work has been done already, and looking forward, hopefully advances in machine learning will keep trickling into mainstream passive acoustics. That means developing new detection and classification algorithms, but also ensuring they are integrated into easy-to-use programs which can be accessed by a wide variety of researchers and industries, not just programming/tech junkies.
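To give a flavour of why this is hard, here is a toy Python/NumPy sketch of about the simplest possible "tonal sound" detector: flag spectrogram frames where one frequency bin stands well above that frame's noise floor. The window sizes and the 10 dB threshold are arbitrary choices of mine, not from any published detector, and a real whistle detector has to do far more (contour tracking across frames, classification, coping with overlapping whistles):

```python
import numpy as np

def stft_mag(x, n_fft=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (time, freq)

def detect_tonal_frames(spec, snr_db=10.0):
    """Flag frames whose peak bin stands well above the frame's
    median energy -- a crude stand-in for a whistle detector."""
    peak = spec.max(axis=1)
    floor = np.median(spec, axis=1) + 1e-12
    return 20 * np.log10(peak / floor) > snr_db

# Synthetic test signal: white noise with a tone burst in the middle.
fs = 48_000
t = np.arange(fs) / fs                      # 1 s of audio
rng = np.random.default_rng(0)
x = 0.05 * rng.standard_normal(fs)
x[fs // 4: fs // 2] += np.sin(2 * np.pi * 8000 * t[fs // 4: fs // 2])

spec = stft_mag(x)
hits = detect_tonal_frames(spec)
print(f"{hits.sum()} of {len(hits)} frames flagged as tonal")
```

This works on a clean synthetic tone; on real dolphin whistles that cross each other, fade in and out, and sit on top of snapping shrimp, a fixed threshold like this falls over immediately, which is exactly where machine learning should help.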

Storage and big data. OK, not very exciting, and sorry for the buzzword, but where do we store all the data we collect? I work on projects with hydrophone arrays which stream back a terabyte of data a day. Autonomous devices on gliders, buoys, research surveys etc. are all collecting large quantities of acoustic data, and ideally we want to keep it all so that when better analysis algorithms come along, we can run them on historic datasets. The second aspect to this is the availability of data. Maybe the deployment of a few acoustic recorders in the North Sea isn't that interesting, but the data from all the recorders deployed over the last ten years might be. Working out how to get datasets from different institutes all in one place and open to all, with useful and standardised metadata, will allow researchers to ask some interesting questions in the future. This is beyond the scope of academic institutions – it needs infrastructure on a national scale to work.
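On the standardised-metadata point, even a minimal machine-readable record per deployment would go a long way towards making old datasets reusable. Purely as an illustration (every field name and value below is invented by me, not taken from any existing metadata standard):

```python
import json

# Hypothetical minimal metadata record for one recorder deployment.
# The schema is illustrative only -- a real standard would need agreed
# vocabularies for instruments, calibration info, licensing, etc.
record = {
    "deployment_id": "northsea-2016-017",
    "institution": "Example Marine Lab",
    "instrument": {
        "type": "autonomous recorder",
        "sample_rate_hz": 576_000,
        "channels": 1,
    },
    "position": {"lat": 57.12, "lon": 1.85, "depth_m": 42.0},
    "start_utc": "2016-05-01T00:00:00Z",
    "end_utc": "2016-07-01T00:00:00Z",
    "files": [{"name": "ns017_0001.wav", "sha256": "..."}],
}

print(json.dumps(record, indent=2))
```

The point isn't the specific fields; it's that if every institute wrote *something* like this next to its recordings, a national archive could actually index ten years of North Sea deployments rather than a pile of anonymous WAV files.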

So, those are my thoughts. Please agree or disagree. I’m interested to hear what other people think.


Hi Courtney,

Sound is indeed a very interesting theme. If you ask me, hearing is one of the most underestimated senses, especially when we are talking about wildlife protection. People tend to make quite a lot of noises that distinguish them from animals ;-)

I am working on a prototype 'soundscape sensor'. The basic idea is to record all sounds at a particular location and compute a kind of 'normalised' soundscape summary. That summary can then be used to flag sounds that deviate from it. Could be chainsaws, gunshots, car engines, talking people, barking dogs, whatever.

By feeding these sounds to rangers with local knowledge, or to a crowd (like Panthera is doing), the sensor's performance improves over time. 

Although we think our first prototype will be ready by the end of this year (recognising one particular sound within an outdoor environment), the subsequent steps are quite challenging, especially when more than one or two sounds occur at the same time.
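For what it's worth, the "normalised soundscape plus deviation" idea can be sketched very simply: average the per-band energy of many ambient clips as a baseline, then score new clips by their log-spectral distance from it. This is a toy Python/NumPy sketch with made-up synthetic signals, not the prototype's actual method, and it deliberately ignores the hard parts (diurnal variation, weather, overlapping sources):

```python
import numpy as np

def band_energies(x, n_fft=512):
    """Average per-frequency-bin energy over one clip -- a crude
    'soundscape summary' of that clip."""
    hop = n_fft // 2
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft, hop)]
    return np.mean(np.abs(np.fft.rfft(frames, axis=1)) ** 2, axis=0)

def deviation_score(baseline, clip_energy):
    """RMS log-spectral distance between a clip and the baseline;
    large values suggest a sound that doesn't belong."""
    eps = 1e-12
    d = 10 * np.log10((clip_energy + eps) / (baseline + eps))
    return float(np.sqrt(np.mean(d ** 2)))

rng = np.random.default_rng(1)
fs = 16_000

def ambient():
    """One second of stand-in 'forest' background noise."""
    return 0.1 * rng.standard_normal(fs)

# Baseline: average summary over many ambient clips.
baseline = np.mean([band_energies(ambient()) for _ in range(20)], axis=0)

quiet = deviation_score(baseline, band_energies(ambient()))
t = np.arange(fs) / fs
engine = ambient() + 0.5 * np.sin(2 * np.pi * 120 * t)  # low hum intruding
loud = deviation_score(baseline, band_energies(engine))
print(quiet, loud)
```

A clip containing the low engine hum scores well above an ambient-only clip, which is the behaviour you'd then hand off to rangers or a crowd for labelling.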

If you ask me where we will be in the next few years, I would say:

2017: recognising any trained sound within a given field context (using a Raspberry Pi)

2018: learning to distinguish compound sounds (using a backend server)

2019: the same, but much more efficient, so we can run it on a Raspberry Pi

Interested in helping us create this?


Jan Kees

My moonshot would be increased use of DSP in underwater acoustic monitoring, to enable small arrays to filter out engine noise when looking for vocalisations – static reflections of existing noise sources, and Doppler reflections too. Heavyweight DSP might even be enough to gauge the size of objects and give a useful distance estimate…
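A classic building block for that kind of engine-noise rejection is the LMS adaptive noise canceller, sketched below in Python/NumPy. It assumes you have a noise-only reference channel (say, a sensor near the engine) alongside the primary hydrophone; the signals, tap count and step size here are toy values I made up, not tuned for any real vessel:

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=32, mu=0.01):
    """LMS adaptive noise canceller: subtract the best linear estimate
    of the reference (engine) signal from the primary channel, adapting
    the filter weights sample by sample."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent samples first
        est = w @ x                          # predicted engine component
        out[n] = primary[n] - est            # residual = cleaned signal
        w += mu * out[n] * x                 # weight update
    return out

rng = np.random.default_rng(2)
fs = 8_000
t = np.arange(2 * fs) / fs
engine = np.sin(2 * np.pi * 50 * t)                  # narrowband hum
call = 0.2 * np.sin(2 * np.pi * 900 * t) * (t > 1)   # faint 'vocalisation'
primary = call + engine + 0.01 * rng.standard_normal(len(t))
reference = engine                                    # engine-only channel

cleaned = lms_cancel(primary, reference)
```

After the filter converges, the hum in the second half of the recording is largely removed while the 900 Hz call survives. Real arrays would combine something like this with beamforming to also get the directional and Doppler information mentioned above.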