On Cloud N: Cloud Cover Detection Challenge

Clouds obscure important ground-level features in satellite images, complicating their use in downstream applications. Build algorithms for cloud cover detection using a new cloud dataset and Microsoft's Planetary Computer! #science

$20,000 in prizes
Feb 2022
847 joined

To obtain adequate analytical results from multispectral satellite imagery, it is essential to precisely detect clouds and mask them out from any Earth surface analysis, such as land cover classification. The models developed in this competition result in higher quality input data for those analytics.

— Hamed Alemohammad, Executive Director & Chief Data Scientist, Radiant Earth Foundation

Why

Satellite imagery is critical for a wide variety of applications, from disaster management and recovery to agriculture to military intelligence. Sentinel-2 multispectral imagery in particular has been used in applications like tracking erupting volcanoes, mapping deforestation, and monitoring wildfires.

A major obstacle for all of these use cases is the presence of clouds, which introduce noise and inaccuracy into image-based models. As a result, clouds usually have to be identified and removed before these satellite data sources can be used effectively.

The Solution

The goal of this challenge was to detect cloud cover as accurately as possible in multispectral satellite imagery from the Sentinel-2 mission. Algorithms submitted by participants were run on test imagery to produce cloud masks, which were compared against human cloud annotations. The scarcity of labeled data has been a major obstacle for cloud detection efforts, and this challenge featured a unique set of human-verified labels spanning imagery and cloud conditions across three continents.

The Results

Over the course of the competition, participants tested thousands of solutions and significantly advanced methods for cloud detection. The winning approach achieved a Jaccard score (intersection over union) of roughly 90%, detecting 91% of all cloudy pixels (recall) while ensuring that 94% of the pixels it labeled as cloud were actually cloudy (precision). This is a dramatic improvement over the existing thresholding methods provided with Sentinel-2 (65%) and the PyTorch Lightning benchmark (82%).
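To make these three metrics concrete, here is a minimal sketch of how Jaccard, precision, and recall are computed from a pair of binary cloud masks. It assumes masks stored as NumPy arrays with 1 for cloudy pixels and 0 for clear ones; the function name and toy chip are illustrative, not taken from the challenge codebase.

```python
import numpy as np

def cloud_mask_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Compute Jaccard (IoU), precision, and recall for binary cloud masks.

    Both arrays are assumed to contain 1 for cloudy pixels and 0 for clear.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return {
        # Overlap between predicted and true cloud pixels, relative to their union
        "jaccard": intersection / union if union else 1.0,
        # Fraction of predicted cloud pixels that are truly cloudy
        "precision": intersection / pred.sum() if pred.sum() else 1.0,
        # Fraction of true cloud pixels that the model recovered
        "recall": intersection / truth.sum() if truth.sum() else 1.0,
    }

# Toy example: a 2x2 chip where the model misses one of two cloudy pixels
truth = np.array([[1, 1], [0, 0]])
pred = np.array([[1, 0], [0, 0]])
print(cloud_mask_metrics(pred, truth))  # jaccard=0.5, precision=1.0, recall=0.5
```

In this framing, the Jaccard score is always the strictest of the three numbers, since it penalizes false positives and false negatives simultaneously, which makes it a natural single leaderboard metric.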

For example, the top models made notable progress on two key obstacles identified in the field: distinguishing clouds from other bright objects and detecting thin clouds.

Clouds vs. bright objects: Example chip from Buenos Aires, Argentina with a few very brightly illuminated plots of land (left). 1st place correctly selects only a small cloud in the top left corner (center).

Thin clouds: Example chip from Kikwit, Democratic Republic of the Congo that is completely covered by a thin layer of clouds (left). 1st place detects the thin layer and correctly predicts full cloud coverage (center).

See the results announcement for more information on the winning approaches. All of the prize-winning solutions from this competition, along with the dataset assembled for the challenge, are linked below and made openly available for anyone to use and learn from.


RESULTS ANNOUNCEMENT + MEET THE WINNERS

WINNING MODELS ON GITHUB

SENTINEL-2 CLOUD COVER SEGMENTATION DATASET