Pri-matrix Factorization

Data scientists from more than 90 countries drew on 300,000 video clips in a competition to build the best machine learning models for identifying wildlife from camera trap footage. The results are powerful and – equally important … Competition hosted by the Max Planck Institute for Evolutionary Anthropology.

$20,000 awarded
Dec 2017
320 joined


Camera traps are a critical part of wildlife monitoring but analyzing those videos is a big bottleneck. Developing an automated procedure is of the utmost importance to conservation research.

— Dr. Christophe Boesch, Wild Chimpanzee Foundation / Max Planck Institute for Evolutionary Anthropology


While camera traps have become powerful non-invasive tools in research and conservation efforts, they can't yet autonomously label the species they observe. It takes a lot of valuable time to determine whether any animals are present at all (or whether the trigger was just wind-blown vegetation), and if they are, which species.

The Solution

Through the Chimp&See Zooniverse project, a global community crowd-labeled research videos showing wildlife or blank frames. Then, the DrivenData community used cutting-edge computer vision techniques to turn those labels into algorithms for automated species detection. The top 3 submissions best able to predict the presence and type of wildlife across new videos won this challenge.

The Results

The winning algorithm from the challenge achieved 96% accuracy in identifying the presence of wildlife, and 99% average accuracy in identifying species. What's more, in an effort to make these advances more widely available to ecologists and conservationists, the top algorithm was simplified and adapted for use on new videos in an open-source command-line tool. An overview of "Project Zamba" with links to get started is available at
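To make the two headline numbers concrete, here is a minimal sketch of how "presence accuracy" and "average per-species accuracy" can be computed. The labels, predictions, and the exact metric definitions are illustrative assumptions, not the competition's official scoring code; "blank" stands in for a clip with no wildlife.

```python
# Hypothetical sketch: presence accuracy and average per-class accuracy.
# "blank" marks a clip with no animal; any other label is a species.

def presence_accuracy(y_true, y_pred):
    """Fraction of clips where presence/absence of wildlife matches."""
    hits = sum((t != "blank") == (p != "blank") for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

def average_per_class_accuracy(y_true, y_pred):
    """Mean over classes of the per-class accuracy (per-class recall)."""
    classes = sorted(set(y_true))
    per_class = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        correct = sum(y_pred[i] == c for i in idx)
        per_class.append(correct / len(idx))
    return sum(per_class) / len(classes)

# Made-up example: one elephant clip is mislabeled as a chimpanzee,
# so presence is perfect but per-species accuracy drops.
labels = ["blank", "chimpanzee", "elephant", "blank", "chimpanzee"]
preds  = ["blank", "chimpanzee", "chimpanzee", "blank", "chimpanzee"]

print(presence_accuracy(labels, preds))           # 1.0
print(average_per_class_accuracy(labels, preds))  # 2/3: elephant class missed
```

Averaging accuracy per class (rather than over all clips) keeps rare species from being drowned out by common ones, which matters in camera trap data where a few species dominate the footage.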