Project LIFEPLAN is a joint effort to make global sense of patterns and processes in biodiversity. We now aim to advance one of its key objectives: developing global, automated software for bird sound identification. To do so, we need you – and below you will learn how. By contributing even a small part of your expertise on bird sounds, you will help carry this global initiative through.
This is how: Within LIFEPLAN, we have just launched the citizen science project Bird Sounds Global (BSG). BSG now features a new web portal for annotating soundscapes. (Annotating means identifying the species vocalising in the recording.) The aim is to produce training and testing material for automated bird sound identification. The first batch includes 100 000 selected recordings from 80 sites around the world and will be available to all registered users. The recordings are ten seconds long and together cover 2000 species. If you can annotate even a small share of the recordings, that will help us immensely.
The end results will be used to identify bird sounds collected as part of the LIFEPLAN research programme, which aims to explain global biodiversity and its driving factors. The resulting identification models will also be made openly available to everyone.

We warmly invite you to start annotating the soundscapes at https://bsg.laji.fi/identification/. Please also share this link widely with other bird (sound) enthusiasts.


In the first phase of BSG, we asked users to create and validate sound templates of the world's bird species. The global data collected in that first phase is now being put to good use in training a global base model for bird sound recognition. In the new phase of the project, we also need annotated local soundscapes. This added resource will be used to adapt the base model to a specific location. Our pilot work has shown that annotated local soundscapes play an extremely important role in fine-tuning location-specific models, which then identify the local bird species with much higher accuracy. Even 50 confirmed vocalisations per species can substantially improve the model performance for a given location (a minimal sketch of this kind of adaptation follows below).
- (For an example from our pilot project, see the paper "Improving Template-Based Bird Sound Identification".)
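To make the adaptation step concrete, here is a minimal sketch in Python. It is not the BSG implementation: the embedding size, species count and the random "embeddings" are invented placeholders. The sketch assumes the global base model is used as a frozen feature extractor for the annotated ten-second clips, and that a lightweight location-specific classifier is then trained on roughly 50 confirmed vocalisations per species.

```python
# Minimal sketch (not the BSG implementation): adapt a global base model
# to one location by training a small classifier on local annotations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical placeholders: in practice, embeddings would come from the
# frozen global base model applied to annotated 10-second local clips.
n_local_species = 30          # species confirmed at this site
clips_per_species = 50        # "even 50 confirmed vocalisations per species"
embedding_dim = 256           # size of the base model's audio embedding

X = rng.normal(size=(n_local_species * clips_per_species, embedding_dim))
y = np.repeat(np.arange(n_local_species), clips_per_species)

# Hold out part of the local annotations to test the adapted model.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# The location-specific "head": a simple classifier over the embeddings,
# restricted to the species actually occurring at this site.
local_head = LogisticRegression(max_iter=1000)
local_head.fit(X_train, y_train)

print(f"Held-out accuracy on local clips: {local_head.score(X_test, y_test):.2f}")
```

One plausible reason such small amounts of local data can help is that the adapted classifier only has to separate the species actually present at the site, rather than all 2000 species covered globally.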

If you have any questions, feel free to contact me!
Once more, here is the link to the BSG web portal – try it out: https://bsg.laji.fi/identification/instructions
https://twitter.com/birdsoundglobal
https://www.facebook.com/Bird-Sounds-Global-105349581975160
Sebastian Andrejeff
Coordinator of Bird Sounds Global
Carly Batist
11 June 2022 1:41pm
Really cool platform! Will definitely add it to our next update of the Conservation Tech Directory. Curious, could there be cross-platform integration with something like Xeno-Canto or Macaulay Library for example? Just thinking of other existing crowd-sourced/citizen science platforms for bird calls, integration with which might facilitate even quicker detections and better models!
Also just FYI, the link for the paper on "Improving Template-Based Bird Sound Identification" does not work - it goes to a site that says 'Page Not Found'.
11 June 2022 6:10pm
In reply to carlybatist
11 June 2022 1:41pm
Xeno-canto's data would fit the Validate species templates section of BSG (https://bsg.laji.fi/validation/instructions), and we have in fact been in contact with the XC people about adding their data the next time the library is updated. At the moment there is data from the Macaulay Library.
As for annotating soundscapes: the sound recordings in the portal come from the LIFEPLAN project's sampling sites (https://www2.helsinki.fi/en/projects/lifeplan/instructions) and were collected with AudioMoth acoustic loggers. The annotation data collected from these recording sites is used to create and fine-tune location-specific models, which then identify the local bird species with higher accuracy. For this purpose we need annotated recordings from the specific location, collected with the same type of microphone, which is why Xeno-Canto, Macaulay, etc. recordings are not directly applicable in this context. But data from such databases would be extremely useful for building the global identification models.
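As a rough illustration of this division of labour, here is a small Python sketch that routes recordings to the two training stages. The record fields (source, site, recorder) and the site identifier are hypothetical, not the BSG data model; the point is only that location-specific fine-tuning uses annotations from the target site made with the same recorder type, while reference libraries such as Xeno-Canto or the Macaulay Library feed the global model.

```python
# Sketch only: hypothetical metadata records, not the BSG data model.
from dataclasses import dataclass

@dataclass
class Recording:
    source: str     # e.g. "BSG soundscape", "Xeno-Canto", "Macaulay Library"
    site: str       # LIFEPLAN sampling-site identifier, "" if not site-specific
    recorder: str   # recording device type, e.g. "AudioMoth"

def select_for_finetuning(recordings, target_site, target_recorder="AudioMoth"):
    """Keep only annotations usable for location-specific fine-tuning:
    same sampling site, same type of recorder."""
    local, other = [], []
    for rec in recordings:
        if rec.site == target_site and rec.recorder == target_recorder:
            local.append(rec)
        else:
            # Reference-library material (Xeno-Canto, Macaulay, ...) is not
            # directly applicable here, but is valuable for the global model.
            other.append(rec)
    return local, other

recordings = [
    Recording("BSG soundscape", "site-042", "AudioMoth"),
    Recording("Xeno-Canto", "", "various"),
    Recording("Macaulay Library", "", "various"),
]
local, other = select_for_finetuning(recordings, target_site="site-042")
print(f"{len(local)} clip(s) for fine-tuning, {len(other)} left for the global model")
```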