discussion / Human-Wildlife Conflict  / 16 January 2019

Citizen scientists to analyze HWC interventions

Hi all,

I came across this interesting website of the organisation 'Bring the elephant home', which they use to involve citizen scientists in their project on beehive fencing. https://www.zooniverse.org/projects/antoinettebteh/elephants-and-bees-camera-watch/about/research

The idea is that anyone can view the videos of elephants near a beehive fence and provide feedback on what they see: number of animals, age classes, sex, behaviour, etc.

Have any of you ever worked with citizen scientists like this before? Did it work? I can imagine that you can process large quantities of data in a short time, but the downside is that these people are not trained scientists and might not be very serious while filling in the sheets. How do you deal with this?

Best regards,

Femke




Hi Femke,

At the Biological Records Centre we have been tackling these issues for over 50 years! You are right to point out that with citizen scientists you can collect and review a lot of data in a relatively short period of time. Our iRecord system collected 1 million observations in 2018, and just look at how many people are reviewing images over on the Zooniverse (check out Snapshot Serengeti).

You are also right to point out that without strict protocols, and with varying abilities, the use of citizen scientists can introduce biases, which often leads to the impression that citizen science data are of lower quality. There are ways to account for this statistically when you are working with data collected in the field, and for those reviewing images online both the Zooniverse and iSpot have systems in place, such as reviewer reputation scores and multiple reviews of a single image to reach consensus.

There is a good report on using citizen science here.

Best,

Tom

I agree with Tom's comments. A project I work with has used Zooniverse to identify animals in camera trap images. We include a field guide that helps reviewers with trickier distinctions such as deer versus elk (challenging in partial views with IR images). We require each image to be identified by thirty reviewers before scoring it. That allows us either to be quite confident in an identification or to recognize that it requires expert review. We have seen no examples of intentional misidentification. The biggest problem is coming up with enough images to meet demand; some people will work for hours!
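
In case it helps to see what that scoring step can look like, here is a minimal Python sketch of consensus scoring for one image. The thirty-review quota matches what we use; the 0.8 agreement threshold, the function name, and the example labels are just illustrative assumptions, not our actual pipeline.

from collections import Counter

# Sketch: aggregate volunteer labels for a single image into a decision.
# REQUIRED_REVIEWS reflects the thirty-review quota described above;
# AGREEMENT_THRESHOLD is an illustrative value, not a project setting.
REQUIRED_REVIEWS = 30
AGREEMENT_THRESHOLD = 0.8

def score_image(labels):
    """Return a consensus decision, or flag the image for expert review."""
    if len(labels) < REQUIRED_REVIEWS:
        return {"status": "pending", "reviews": len(labels)}
    # Most common label and the fraction of reviewers who chose it.
    top_label, top_count = Counter(labels).most_common(1)[0]
    agreement = top_count / len(labels)
    if agreement >= AGREEMENT_THRESHOLD:
        return {"status": "accepted", "label": top_label, "agreement": agreement}
    return {"status": "expert_review", "label": top_label, "agreement": agreement}

# Example: 26 of 30 reviewers said "elk", so the image is accepted as elk;
# a 16/14 split would instead be routed to an expert.
print(score_image(["elk"] * 26 + ["deer"] * 4))

The reason for the high quota is that the level of disagreement itself is informative: images everyone agrees on can be accepted automatically, while the genuinely ambiguous ones surface for an expert to look at.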

Thanks to you both for your comments, very interesting indeed! It is also great to hear that so many people are eager to get involved in this kind of research and participate with great enthusiasm.

Best regards,
Femke