Open Cities AI Challenge: Segmenting Buildings for Disaster Resilience

Can you map building footprints from drone imagery? This semantic segmentation challenge leverages computer vision and data from OpenStreetMap to support disaster risk management efforts in cities across Africa. #disasters

$15,000 in prizes
Completed Mar 2020
1,099 joined

This challenge created public goods that advance the state of practice in applying ML to understanding risk in urban Africa, including new ML performance benchmarks and in-depth explorations of how to responsibly create and deploy AI systems for disaster risk management.

Why

As urban populations grow, more people are exposed to the benefits and hazards of city life. To manage the risk of natural disasters in this dynamic built environment, buildings need to be mapped frequently and in enough detail to help communities prepare and respond.

The Solution

ML algorithms are becoming critical to scaling these mapping efforts by learning to generate building footprints automatically from aerial imagery. To power these models, this competition featured high-resolution drone imagery from 12 African cities and regions covering more than 700,000 buildings. This imagery was paired with building footprints annotated with the help of local OpenStreetMap communities.
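
To make that pairing concrete, here is a minimal sketch of one common way footprint annotations become training targets for a segmentation model: rasterizing building polygons into a binary mask aligned with an image chip. This is illustrative only, not the challenge's actual pipeline; the file names are hypothetical and the use of rasterio and geopandas is an assumption.

```python
# Illustrative sketch: turn building-footprint polygons into a binary
# segmentation mask aligned with one drone image chip.
# File names ("chip.tif", "buildings.geojson") are hypothetical.
import geopandas as gpd
import numpy as np
import rasterio
from rasterio.features import rasterize

# Read the chip's georeferencing so the mask lines up pixel-for-pixel.
with rasterio.open("chip.tif") as src:
    transform, height, width, crs = src.transform, src.height, src.width, src.crs

# Load footprint polygons and reproject them to the chip's CRS.
footprints = gpd.read_file("buildings.geojson").to_crs(crs)

# Burn a 1 into every pixel covered by a building polygon, 0 elsewhere.
mask = rasterize(
    ((geom, 1) for geom in footprints.geometry),
    out_shape=(height, width),
    transform=transform,
    fill=0,
    dtype="uint8",
)

np.save("chip_mask.npy", mask)  # binary target for model training
```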

Throughout the challenge, participants competed to build computer vision models that could most accurately map buildings on the ground. In parallel, the novel Responsible AI track gave participants an opportunity to engage with the ethical implications of applying ML in disaster risk contexts.

The Results

The top model achieved an impressive 0.86 Jaccard score (i.e. intersection over union) on the private test set! That translates to segmenting buildings with 92% precision (the share of predicted building pixels that are actually buildings) and 93% recall (the share of actual building pixels that were predicted).
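
For reference, the snippet below is a minimal sketch (not the challenge's official scorer) of how these three numbers relate when computed from binary prediction and ground-truth masks.

```python
# Sketch: Jaccard (IoU), precision, and recall from binary masks (0/1 arrays).
import numpy as np

def segmentation_scores(pred: np.ndarray, truth: np.ndarray) -> dict:
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # building pixels predicted correctly
    fp = np.logical_and(pred, ~truth).sum()   # predicted building, actually background
    fn = np.logical_and(~pred, truth).sum()   # building pixels that were missed
    return {
        "jaccard": tp / (tp + fp + fn),       # intersection over union
        "precision": tp / (tp + fp),          # predicted pixels that are real buildings
        "recall": tp / (tp + fn),             # real building pixels that were found
    }

# Consistency check: precision 0.92 and recall 0.93 imply
# IoU = 1 / (1/0.92 + 1/0.93 - 1) ≈ 0.86, matching the reported score.
```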

Sample outputs reflecting predictions from the winning segmentation model (red) compared with ground truth labels (black). Left image from Lusaka, Zambia (chip Jaccard score = 0.93), right image from Zanzibar, Tanzania (chip Jaccard score = 0.88).

This result represents a dramatic improvement over the relatively low-effort benchmark models published before the challenge (Jaccard scores of 0.5915 and 0.6235). What's more, the challenge datasets had enough diversity in locations and sensors to make the open-source winning solutions useful for a range of urban mapping projects across Africa.


RESULTS ANNOUNCEMENT + MEET THE WINNERS

WINNING MODELS ON GITHUB

WINNING SUBMISSIONS TO THE RESPONSIBLE AI TRACK

CHALLENGE DATASET