discussion / Acoustics  / 4 August 2020

Advice on a Master's project

Hi all,

I’m posting here to ask for some advice. Sorry in advance for the long post.

I’m currently studying for an integrated Master’s in Electrical and Information Engineering at the University of Cambridge. I’ve been lucky that my supervisor has been very open to collaborating with me on designing my final-year Master’s project. I’m really interested in conservation and biodiversity monitoring and would love to design a project around this. Unfortunately, though, the engineering department has never done a project in this area! The project doesn’t have to break the internet from my point of view; personally, I’d rather spend my year working in this field and learning something really useful than researching hardcore engineering.

My supervisor is very interdisciplinary and knowledgeable in product development, and we are both trying to learn about the current issues in conservation. I have been reading papers and trying to get the lay of the land. We think acoustic monitoring is an interesting area that could provide a project attainable within the time frame of the Master’s project, as well as "technical" enough to count as an engineering Master’s. My experience is in analogue and digital electronics, some processor coding, and rather more experience in signal processing, inference and ML.

I’m trying to contact knowledgeable people outside the engineering faculty to help generate ideas and bring some expertise to the process. Currently we are considering two ideas:

1. Localisation of sound (in 2D or 3D).

I know there is already some work in this area; the list of bioacoustics software recently posted in this group links to some of it. I am wondering if there is scope to improve on these tools. For example, some use analytical DOA models - is there a way to achieve localisation through ML methods instead? Or through networked sensors? Or by localising in real time, so that a camera could be steered to photograph the source of the sound?
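To make the networked-sensor idea concrete: the core signal-processing step is estimating the time difference of arrival (TDOA) between two synchronised channels by cross-correlation. A minimal sketch (function names and test signals are mine; a real system would add GCC-PHAT weighting and sub-sample interpolation):

```python
import numpy as np

def tdoa_samples(x, y):
    """Estimate how many samples channel y lags channel x.

    Uses the peak of the full cross-correlation; a negative
    result means y leads x.
    """
    c = np.correlate(x, y, mode="full")
    # np.correlate's lag axis runs from -(len(y)-1) to len(x)-1,
    # and a peak at lag -d corresponds to y lagging x by d samples.
    return (len(y) - 1) - int(np.argmax(c))

def tdoa_seconds(x, y, fs):
    """Same estimate in seconds, ready for conversion to a
    path-length difference via the speed of sound (~343 m/s in air)."""
    return tdoa_samples(x, y) / fs
```

At 48 kHz a one-sample error already corresponds to roughly 7 mm of path difference, which is why getting the correlation peak right matters so much for localisation.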


2. AudioMoth

I love the idea of the AudioMoth and the low-cost, lightweight, low-power applications that drive its design. I found their original paper great, and the most recent paper, on low-power detection algorithms, fascinating. There are a lot of papers out there throwing around extremely complex variations on neural nets, and I really liked how the AudioMoth team targeted the design of these algorithms to the problem!

I like the design parameters of the problem the AudioMoth is trying to solve. What I have been struggling with is coming up with a suitable project that involves it. One idea was to integrate an analogue, programmable pattern-recognition circuit with micro-watt power dissipation (from a paper that came out earlier this year). Another was to expand on the low-power detection algorithms.


The project runs for 16 weeks of term time, with 6 weeks of vacation in the middle to work on it as well. The final few weeks are generally reserved for exam study and the project write-up.

I realise this is quite a large dump of thoughts, and I would be very grateful for any help, any resources you could point me towards, or thoughts on our current ideas.

All the best,


Hi Harry,

Thanks for the interest in AudioMoth. We'd be happy to collaborate on a project. A couple of ideas:

1) I've recently had an MSc student looking at porting NN models from Edge Impulse to AudioMoth; the end result was a proof of concept that this works. A further project could look at the complexity of models and improve the tooling, making it easy to capture recordings, label them, and then generate runnable models that fit on AudioMoth.

2) Combining the two areas you mention above (a bit more speculative and open-ended): we have a version of AudioMoth with a GPS module that provides very accurate synchronisation of recordings - to about 1 us, compared with the ~21 us inter-sample spacing at 48 kHz. This allows an array of AudioMoths to be deployed, without any interconnection, to do 3D localisation. No one has really explored this yet, and I think it could make a really interesting project; particularly combining inference of the source path with a sensible motion model, using the correlations between recorded signals as noisy observations. Another setting would be multiple simultaneous sources, e.g. multiple birds singing during the dawn chorus. Could you localise all the sources? Could you then extract individual sources from the background and listen to just one bird?
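To give a feel for what the synchronised array enables: once you have TDOAs between AudioMoths, localisation reduces to finding the point whose predicted arrival-time differences best match the measurements. A deliberately crude 2D grid-search sketch (all names and numbers are mine; a real system would solve the 3D nonlinear least-squares problem properly):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air, roughly

def localise_grid(mics, tdoas, ref=0, extent=20.0, step=0.5):
    """Brute-force 2D localisation from TDOAs.

    mics:  (M, 2) microphone positions in metres.
    tdoas: (M,) arrival-time differences in seconds, relative to
           microphone `ref` (so tdoas[ref] == 0).
    Returns the grid point whose predicted TDOAs best fit the data.
    """
    best, best_err = None, np.inf
    for gx in np.arange(-extent, extent, step):
        for gy in np.arange(-extent, extent, step):
            p = np.array([gx, gy])
            dists = np.linalg.norm(mics - p, axis=1)
            pred = (dists - dists[ref]) / SPEED_OF_SOUND
            err = float(np.sum((pred - tdoas) ** 2))
            if err < best_err:
                best, best_err = p, err
    return best
```

With four or more receivers synchronised to ~1 us as described above, the limiting factor becomes the sample rate and the quality of the correlations rather than the clocks.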



Hi Harry,

May I ask which paper described the "analogue, programmable pattern recognition circuit"? I've been working sporadically on a matched filter, which you could say is the digital version of what you describe.

I think there is a real need for such a thing. My application is gunshot (poaching) detection and localisation. There are established AI methods to detect a gunshot, but AFAIK none that will do so at a fine enough temporal resolution to realise localisation on an embedded processor. If you work this angle, I think it is certain to have a large positive impact.
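For what it's worth, the matched-filter idea is easy to prototype offline before committing to embedded hardware. A toy sketch (names and thresholds are mine; a real detector would need a library of gunshot templates and much more careful normalisation):

```python
import numpy as np

def matched_filter_detect(signal, template, threshold=0.9):
    """Correlate `signal` against a known `template` and return the
    sample indices where the normalised score exceeds `threshold`.

    The score is the correlation divided by the local signal energy,
    so a noise-free, perfectly aligned match scores ~1.0.
    """
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    # score[i] compares signal[i : i + len(t)] against the template.
    score = np.correlate(signal, t, mode="valid")
    local_energy = np.sqrt(np.convolve(signal ** 2,
                                       np.ones(len(t)), mode="valid"))
    score = score / np.maximum(local_energy, 1e-12)
    return np.flatnonzero(score > threshold)
```

The attraction over an NN classifier here is exactly the point above: the peak index is already at sample resolution, so one operation gives you both the detection and the timing that TDOA needs.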



Hi Harry,

The project isn't written up anywhere, it's in its very early stages.  But in investigating the various options, it became clear there's only one way to do it right now (that also conforms to operational requirements).

The paper you linked to is very interesting. I don't yet understand how they get that degree of selectivity from the filter bank, but the technique seems driven by practicality rather than by hewing to the current AI trend.

It seems clear that at the moment there are no AI techniques that can do classification on the field detector at the kind of temporal resolution that TDOA requires, and at a decent power draw on an embedded system. Therefore some other, presumptive technique should be used, with the false classifications taken care of offline. I'm going ahead with matched filtering even though it probably won't be pretty, because being only software, it means the hardware decisions can be nailed down now.

I'm not optimistic about DOA techniques, mainly because of the difficulty of accurately aligning the arrays, though there are some ways around that. I also want to generalise to the underwater case, and underwater arrays are especially costly.



Hi Harry (and Alex and Harold),

These ideas are EXACTLY what conservation biologists are looking for! I work on passive acoustic monitoring with ruffed lemurs in Madagascar, and being able to localize where they're calling across a grid, to model distribution/occupancy/etc., would be immensely helpful in monitoring populations and providing updated censuses. Ruffed lemurs exhibit contagious choruses, called roar-shrieks (I'm not kidding), so once one individual starts, everyone joins in, and out-of-sight groups sometimes answer. I'm trying to see if you can reliably estimate chorus size (i.e. how many lemurs are calling in a chorus) from some sort of acoustic measurement or index. If you could run that classification model on-board alongside localization, you could potentially estimate abundance too. I've spoken with Dan at Edge Impulse about embedding models on AudioMoths and other devices, and it seems a promising avenue.
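The simplest version of an acoustic index along these lines is just short-time energy: count bursts that rise well above the background. A toy sketch (all naming and thresholds are mine, and since roar-shrieks overlap, a burst count is at best a crude lower bound on chorus size):

```python
import numpy as np

def count_energy_bursts(signal, fs, win_s=0.05, threshold_db=10.0):
    """Toy acoustic index: count contiguous runs of short-time energy
    that rise `threshold_db` above the median frame energy.

    Overlapping calls merge into one burst, so this undercounts callers.
    """
    win = int(win_s * fs)
    n_frames = len(signal) // win
    frames = signal[: n_frames * win].reshape(n_frames, win)
    energy_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    active = energy_db > np.median(energy_db) + threshold_db
    # Count rising edges (off -> on transitions), plus one if the
    # recording starts inside a burst.
    return int(np.sum(active[1:] & ~active[:-1]) + active[0])
```

Going from "number of bursts" to "number of callers" is exactly where the ground-truthed focal-follow recordings would earn their keep.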

I'm a biologist by training, so my engineering and programming skills are minimal and I have no idea if what I'm saying is possible or practical. But I have a bunch of soundscape data, as well as in-person recordings of the roar-shrieks with known subgroup sizes and associated GPS points as "ground truth" data (we were doing full-day focal follows while the ARUs were running).

I'll stop going down the rabbit hole now, but I'm excited to hear that these kinds of techniques are being thought about and worked on! Happy to help if I can, and always open to collabs. Can't wait to see what comes of your thesis, Harry!

All the best,