Where's Whale-do?

Help the Bureau of Ocean Energy Management (BOEM), NOAA Fisheries, and Wild Me accurately identify endangered Cook Inlet beluga whales from photographic imagery. Scalable photo-identification of individuals is critical to population assessment, management, and protection for these endangered whales.

$35,000 in prizes
June 2022
442 participants joined

"We are excited to start using the winning models and anticipate this will speed up our work flow substantially. We will be able to get the results of our annual surveys out to the public and managers much more rapidly so that conservation efforts can be based on the most recent information."

— Paul Wade, NOAA Fisheries Research Biologist and lead for NOAA’s research on Cook Inlet Belugas

Why

Cook Inlet belugas are an endangered population of beluga whales at risk of extinction after years of hunting, and they continue to face threats from vessel traffic in the busy Cook Inlet waterway.

In order to more closely monitor their health and track individual whales, the NOAA Alaska Fisheries Science Center conducts an annual photo-identification survey of Cook Inlet belugas. But processing and analyzing new whale images is largely manual, consuming significant time and resources. New and improved methods are needed to help automate this process and accurately identify matches of the same individual whale across different survey images.

The Solution

In recent years, conservationists have been exploring the use of machine learning to support their work, often applying computer vision techniques to large datasets of images. In many cases, these techniques can be less invasive, expensive, or laborious than traditional research methods such as physically tagging animals. One task they can help with is animal re-identification, in which a model learns to recognize new images of a previously seen individual; wildlife researchers can then use these matches to estimate population size and health.
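To make the idea concrete, here is a minimal sketch of embedding-based re-identification, a common approach for this kind of task. The backbone and library choices below are illustrative assumptions, not the winning teams' methods: a feature extractor maps each photo to a vector, and photos of the same individual are expected to land close together in that space.

```python
# Sketch only: a generic embedding extractor for whale re-identification.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# ResNet-50 with the classification head removed, used as a feature extractor.
# In practice the backbone would be fine-tuned on labeled whale images with a
# metric-learning loss (e.g., triplet or ArcFace) so that embeddings of the
# same individual cluster together.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(image_path: str) -> torch.Tensor:
    """Return an L2-normalized embedding for a single whale photo."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    feat = backbone(img).squeeze(0)
    return feat / feat.norm()
```

Images whose embeddings have high cosine similarity are then treated as candidate matches of the same individual.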

The Where's Whale-do challenge invited participants to help wildlife researchers by developing models to accurately identify individual Cook Inlet beluga whales from photographic images. The challenge dataset consisted of over 9,000 images of beluga whales taken from surveys between 2017 and 2019. Participants submitted inference code and model assets to a code execution environment in which predictions were generated on a hidden set of 10 scenarios, testing their skill in identifying matches across a range of query-database configurations.
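Within a scenario like this, each query image is matched by ranking the database images by similarity to the query's embedding. The sketch below uses random stand-in embeddings and made-up image IDs rather than the challenge's actual scenario format:

```python
# Sketch only: rank database images for one hypothetical query.
import numpy as np

def rank_database(query_emb: np.ndarray,
                  db_embs: np.ndarray,
                  db_ids: list[str],
                  top_k: int = 20) -> list[tuple[str, float]]:
    """Return the top_k database image IDs most similar to the query."""
    # Embeddings are assumed L2-normalized, so a dot product is cosine similarity.
    scores = db_embs @ query_emb
    order = np.argsort(-scores)[:top_k]
    return [(db_ids[i], float(scores[i])) for i in order]

# Example with random stand-in embeddings (128-dim, 1,000 database images).
rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 128))
db /= np.linalg.norm(db, axis=1, keepdims=True)
query = db[42] + 0.05 * rng.normal(size=128)
query /= np.linalg.norm(query)
ids = [f"img_{i:04d}" for i in range(1000)]
print(rank_database(query, db, ids)[:3])
```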

The Results

Four finalists topped the final leaderboard with scores close to 0.5 mAP (mean average precision). Finalists achieved perfect scores (1.0 mAP) on about 25% of queries, which often meant picking out up to 20 correct matches from a database of more than 1,000 images. Models with these capabilities have the potential to significantly improve the individual re-identification process for wildlife researchers.
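For context on the metric: mAP rewards rankings that place true matches near the top. For each query, precision is computed at every rank where a correct match appears and averaged; those per-query scores are then averaged across all queries. Below is a minimal sketch under standard retrieval conventions, not necessarily the challenge's exact scoring implementation:

```python
# Sketch only: mean average precision for ranked retrieval.
import numpy as np

def average_precision(ranked_ids: list[str], relevant_ids: set[str]) -> float:
    """Precision averaged over the ranks at which true matches appear."""
    hits, precisions = 0, []
    for rank, img_id in enumerate(ranked_ids, start=1):
        if img_id in relevant_ids:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

def mean_average_precision(all_rankings, all_relevant) -> float:
    """Average the per-query AP scores across all queries."""
    return float(np.mean([average_precision(r, rel)
                          for r, rel in zip(all_rankings, all_relevant)]))

# A query whose two true matches are ranked 1st and 3rd scores
# (1/1 + 2/3) / 2 ≈ 0.83.
print(average_precision(["a", "b", "c"], {"a", "c"}))
```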

Finalists were also invited to make a submission for the Explainability Bonus Round, in which they were asked to visualize which regions of an image were being used by their models to identify an individual whale. Bonus Round submissions, as seen in the example below, pointed to the dorsal ridge and surrounding areas, along with scars and other marks, as the most important features being used by the winning models.

[Example Bonus Round explainability visualization highlighting the dorsal ridge and surrounding markings on a beluga image]
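One simple, model-agnostic way to produce this kind of region-level visualization is occlusion sensitivity: gray out one patch of the query image at a time and measure how much the match score against a reference image of the same whale drops. This is illustrative only and the finalists' own methods may differ; `embed` is assumed to be a function like the one sketched earlier, here taking an HxWx3 uint8 array and returning a normalized 1-D vector.

```python
# Sketch only: occlusion-sensitivity heatmap for a matching model.
import numpy as np

def occlusion_heatmap(query_img: np.ndarray,
                      ref_emb: np.ndarray,
                      embed,
                      patch: int = 32,
                      stride: int = 16) -> np.ndarray:
    """Score drop when each patch of the query image is grayed out."""
    base = float(embed(query_img) @ ref_emb)
    h, w = query_img.shape[:2]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = query_img.copy()
            occluded[i * stride:i * stride + patch,
                     j * stride:j * stride + patch] = 128  # gray patch
            # A larger drop means this region mattered more for the match.
            heat[i, j] = base - float(embed(occluded) @ ref_emb)
    return heat
```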

See the results announcement for more information on the winning approaches. All of the prize-winning solutions from this competition, along with the dataset assembled for the challenge, are linked below and available for anyone to continue to use and learn from.


RESULTS ANNOUNCEMENT + MEET THE WINNERS

WINNING MODELS ON GITHUB

CHALLENGE DATASET ON LILA BC