
Competition: iWildCam 2020

Want to compete in iWildCam 2020, identifying species in camera trap images to support biodiversity monitoring and improve automatic species classification models? Because the Workshop on Fine-Grained Visual Categorization has moved entirely online this year, the team entry deadline has been extended: join the competition by May 19th to secure your spot! Final team submissions are due on May 26th. With nearly 100 teams already competing, this year's iWildCam challenge promises to be exciting. Sign up today and find your team!


#iWildCam 2020 is getting close to 100 teams! There's still 27 days left to help monitor biodiversity by competing to identify species in camera trap images. Check it out on @kaggle: https://t.co/WoNSQUPODP

— FGVC Workshop (@fgvcworkshop) April 29, 2020

Competition Timeline

May 19, 2020 - Team Entry deadline. This is the last day participants may join or merge teams.

May 26, 2020 - Final submission deadline.

About the Competition

Camera Traps (or Wild Cams) enable the automatic collection of large quantities of image data. Biologists all over the world use camera traps to monitor biodiversity and population density of animal species. We have recently been making strides towards automatic species classification in camera trap images. However, as we try to expand the scope of these models we are faced with an interesting problem: how do we train models that perform well on new (unseen during training) camera trap locations? Can we leverage data from other modalities, such as citizen science data and remote sensing data?

In order to tackle this problem, we have prepared a challenge where the training data and test data come from different cameras spread across the globe. The sets of species seen by each camera overlap, but are not identical. The challenge is to correctly classify the species in images from the test cameras. To explore multimodal solutions, we allow competitors to train on the following data: (i) our camera trap training set (data provided by WCS), (ii) iNaturalist 2017-2019 data, and (iii) multispectral imagery (from Landsat 8) for each of the camera trap locations. On the competition GitHub page we provide the multispectral data, a taxonomy file mapping our classes into the iNat taxonomy, a subset of iNat data mapped into our class set, and a camera trap detection model (the MegaDetector) along with the corresponding detections.
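As a rough illustration of how the taxonomy file might be used, the sketch below builds a lookup from iNat taxon names to iWildCam category ids. The file name and column names here are hypothetical, not the official schema; check the actual file on the competition GitHub page.

```python
import csv

# A minimal sketch, not the official schema: assumes the taxonomy file is a
# CSV mapping iNat taxon names to iWildCam category ids. The file name and
# column names below are hypothetical; check the release for the real ones.
def load_taxonomy_map(path):
    """Build a dict from iNat taxon name -> iWildCam category id."""
    mapping = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            mapping[row["inat_taxon_name"]] = row["iwildcam_category_id"]
    return mapping

taxonomy = load_taxonomy_map("taxonomy_mapping.csv")  # hypothetical file name
```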

This is an FGVCx competition as part of the FGVC7 workshop at CVPR 2020, and is sponsored by Microsoft AI for Earth and Wildlife Insights. There is a GitHub page for the competition here. Please open an issue if you have questions or problems with the dataset.

You can find the iWildCam 2018 Competition here and the iWildCam 2019 Competition here.

Data Overview

The WCS training set contains 217,959 images from 441 locations, and the WCS test set contains 62,894 images from 111 locations. These 552 locations are spread across the globe.
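To give a sense of the metadata's shape, here is a minimal sketch that counts images per location. It assumes the WCS annotations follow the COCO Camera Traps JSON format (an "images" list whose entries carry a "location" field); the file name is hypothetical.

```python
import json
from collections import Counter

# Sketch assuming the WCS annotations follow the COCO Camera Traps JSON
# format (an "images" list whose entries carry a "location" field). The
# file name is hypothetical.
with open("iwildcam2020_train_annotations.json") as f:
    meta = json.load(f)

images_per_location = Counter(img["location"] for img in meta["images"])
print(len(images_per_location), "locations,",
      sum(images_per_location.values()), "images")
```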

You may also choose to use supplemental training data from the iNaturalist 2017, iNaturalist 2018, and iNaturalist 2019 competition datasets. As a courtesy, we have curated all the images from these datasets containing classes that might be in the test set and mapped them into the iWildCam categories. Note: these curated images come only from the iNaturalist 2017 and iNaturalist 2018 datasets, because there are no classes in common between the iNaturalist 2019 dataset and the WCS dataset. However, participants are still free to use the iNaturalist 2019 data.

This year we are providing Landsat-8 multispectral imagery for each camera location as supplementary data. In particular, each site is associated with a series of patches collected between 2013 and 2019. The patches are extracted from a "Tier 1" Landsat product, which consists only of data that meets certain geometric and radiometric quality standards. Consequently, the number of patches per site varies from 39 to 406 (median: 147). Each patch is 200x200x9 pixels, covering a 6 km x 6 km area at a resolution of 30 meters per pixel across 9 spectral bands. Note that all patches for a given site are registered, but are not centered exactly at the camera location to protect the integrity of the site.
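As one example of working with these patches, the sketch below computes NDVI (a standard vegetation index) from a single patch. It assumes the nine channels are stored in Landsat 8 band order, so red (band 4) is index 3 and near-infrared (band 5) is index 4, and that patches load as NumPy arrays; both the file name and the channel layout are assumptions to verify against the actual release.

```python
import numpy as np

# Sketch: compute NDVI for one 200x200x9 Landsat 8 patch. Assumes the nine
# channels are stored in band order 1-9, so red (band 4) is index 3 and
# near-infrared (band 5) is index 4, and that patches load as NumPy arrays.
# The file name and channel layout are assumptions to verify against the data.
patch = np.load("location_0001_patch_000.npy").astype(np.float32)

red = patch[..., 3]
nir = patch[..., 4]
ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)  # guard divide-by-zero
print("mean NDVI:", ndvi.mean())
```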

The data can be downloaded from the competition GitHub page.

Camera Trap Animal Detection Model

We are also providing a general animal detection model which competitors are free to use as they see fit.

The model is a TensorFlow Faster R-CNN with an Inception-ResNet-v2 backbone and atrous convolution.

The model and sample code for running the detector over a folder of images can be found here.

We have run the detector over the three datasets, and we provide the top 100 boxes and associated confidences for each image, along with the WCS metadata.
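For instance, the provided detections could be used to crop the most confident animal box before classification, which removes background that varies strongly across locations. The sketch below assumes the detections follow the MegaDetector batch-output convention (per-image lists of normalized [x, y, width, height] boxes with confidence scores); the file name and field names are assumptions to check against the actual release.

```python
import json
from PIL import Image

# Sketch assuming the provided detections follow the MegaDetector batch-output
# convention: per image, a list of detections with a confidence score ("conf")
# and a normalized [x, y, width, height] box ("bbox"). The file name and field
# names are assumptions to check against the actual release.
with open("iwildcam2020_megadetector_results.json") as f:
    detections_by_id = {d["id"]: d for d in json.load(f)["images"]}

def crop_top_detection(image_path, image_id, min_conf=0.8):
    """Crop the highest-confidence box, or return None if none clears min_conf."""
    dets = [d for d in detections_by_id[image_id].get("detections", [])
            if d["conf"] >= min_conf]
    if not dets:
        return None
    x, y, w, h = max(dets, key=lambda d: d["conf"])["bbox"]
    img = Image.open(image_path)
    W, H = img.size
    return img.crop((int(x * W), int(y * H), int((x + w) * W), int((y + h) * H)))
```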

See the competition GitHub page for further details.