Water Supply Forecast Rodeo: Forecast Stage

Water managers in the Western U.S. rely on accurate water supply forecasts to better operate facilities and mitigate drought. Help the Bureau of Reclamation improve seasonal water supply estimates in this probabilistic forecasting challenge! [Forecast Stage] #climate

$50,000 in prizes
Completed Jul 2024
47 joined

Final Prize Stage Preview

This page outlines the prerequisites and necessary submission components to be considered for Overall Prizes and Explainability Bonus Prizes in the Water Supply Forecast Rodeo. For an overview of the prizes, please refer to the Prize Breakdown on the home page.


Table showing the requirements and deliverables for each prize category in the Water Supply Forecast Rodeo.

Overall prizes

The main challenge prizes will be awarded based on an overall evaluation that includes both quantitative forecast skill (forecast and cross-validation results) and qualitative evaluation of methodology.

Eligibility prerequisites

  • Successfully submitted code in both Hindcast and Forecast Stages.

Submissions

  • Conduct leave-one-out cross-validation (LOOCV) over the 20-year hindcast period (a sketch of year-wise LOOCV follows this list).
  • Produce a Final Model Report by expanding upon the Hindcast Model Report. The updated report will incorporate additional topics such as results from LOOCV and discussion of the model's performance under different conditions. Detailed guidelines on the specific requirements for the Final Model Report will be communicated to solvers in February 2024.
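For concreteness, the sketch below shows one way to structure year-wise LOOCV, holding out each water year in turn and training on the remainder. The DataFrame layout, the column names ("water_year", "volume"), and the gradient-boosted model are illustrative assumptions, not part of the challenge specification.

    # Minimal sketch of leave-one-out cross-validation by water year.
    # Column names ("water_year", "volume"), the feature list, and the
    # model choice are illustrative assumptions, not challenge requirements.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    def loocv_by_water_year(df: pd.DataFrame, feature_cols: list[str]) -> pd.DataFrame:
        """Hold out each water year in turn; train on the rest, predict the held-out year."""
        folds = []
        for held_out_year in sorted(df["water_year"].unique()):
            train = df[df["water_year"] != held_out_year]
            test = df[df["water_year"] == held_out_year]
            model = GradientBoostingRegressor(loss="quantile", alpha=0.5)  # median forecast
            model.fit(train[feature_cols], train["volume"])
            folds.append(test.assign(pred_volume=model.predict(test[feature_cols])))
        return pd.concat(folds)

Because every fold excludes exactly one water year, each year's prediction is made by a model that never saw that year, which is what allows the 20-year cross-validation results to stand in for out-of-sample forecast skill.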

Evaluation criteria

Forecast Skill (Hindcast) (30%)
Solutions will be evaluated based on cross-validation results over the 20-year hindcast period.
Forecast Skill (Forecast) (10%)
Solutions will be evaluated based on their predictions' quantile score from the Forecast Stage evaluation (the quantile loss underlying this score is sketched after these criteria).
Rigor (20%)
To what extent is the solution methodology based on a sound physical and/or statistical foundation? Judges will consider how methodological decisions support or limit different aspects of rigor, such as avoiding overfitting, avoiding data leakage, assessing and mitigating biases, and potential for the model to produce valid predictions in an applied context. Judges will also consider whether any aspects of the methodology are physically implausible.
Innovation (10%)
To what extent does the solution use datasets or modeling techniques that advance the state-of-the-art in water supply forecasting? Judges will consider innovation in any aspect of the technical approach, including but not limited to the data sources used, feature engineering, algorithm and architecture selection, or the approach to training and evaluation.
Generalizability (10%)
How well does the solution generalize to the varied sites and conditions tested in the challenge? Judges will consider reported information and hypotheses about the model’s performance under different geographic, environmental, and temporal conditions.
Efficiency & Scalability (10%)
How computationally efficient is the solution, and how well could it scale to an increased number of sites? Judges will consider all aspects of efficiency, such as the reported total test runtime (including data processing), training resource costs (e.g., hardware, memory usage), and any reported potential for efficiency improvements or optimizations.
Clarity (10%)
How clearly are model mechanics exposed, communicated, and visualized in the report? Judges will consider how well the report is organized and presented.
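As background for the Forecast Skill criteria above, the sketch below shows the standard quantile (pinball) loss commonly used to score probabilistic forecasts at fixed quantile levels. The 0.10/0.50/0.90 levels and the simple averaging are assumptions based on the uncertainty bounds named on this page; the exact official metric (levels, scaling, averaging) is defined by the challenge's evaluation documentation.

    # Standard quantile (pinball) loss: a hedged sketch of the scoring
    # idea behind the "quantile score" criterion. The 0.10/0.50/0.90
    # levels and plain averaging here are assumptions, not the official
    # metric definition.
    import numpy as np

    def pinball_loss(y_true: np.ndarray, y_pred: np.ndarray, tau: float) -> float:
        """Mean pinball loss at quantile level tau; under- and
        over-prediction are penalized asymmetrically."""
        diff = y_true - y_pred
        return float(np.mean(np.where(diff >= 0, tau * diff, (tau - 1) * diff)))

    def mean_quantile_score(y_true: np.ndarray, preds: dict[float, np.ndarray]) -> float:
        """Average the per-quantile losses, e.g. over taus 0.10, 0.50, 0.90."""
        return float(np.mean([pinball_loss(y_true, q, tau) for tau, q in preds.items()]))

The asymmetry is what rewards well-calibrated intervals: for the 0.90 quantile, under-prediction costs nine times more than over-prediction, so a forecaster minimizes the loss by issuing a value that the true volume falls below about 90% of the time.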

Explainability and communication bonus prizes

Operational water managers can make better decisions when forecasts are accompanied by information that supports and explains them. Supporting information that explains or communicates forecasted conditions can include, but is not limited to, graphical, tabular, or narrative information explaining which inputs, relationships, or processes most strongly influenced a forecast. Questions of interest to the organizers include the following (one possible output is sketched after this list):

  • Which predictors or relationships between predictors most strongly influenced each forecast?
  • Which predictors or relationships between predictors most strongly influence the uncertainty bounds (i.e., 0.10 and 0.90 quantiles) for a given forecast?
  • What change in predictors/relationship between predictors most strongly influenced a change between two successive forecasts?
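As one hedged illustration of the first two questions, the sketch below computes per-quantile permutation importances for separately fitted gradient-boosted quantile models, so the predictors driving the median forecast can be compared with those driving the 0.10 and 0.90 uncertainty bounds. The model family, the scikit-learn tooling, and the default scorer are assumptions for illustration, not a required approach.

    # One possible explainability output: per-quantile permutation
    # importances showing which predictors most influence the median
    # forecast versus the 0.10/0.90 uncertainty bounds. The model family
    # and scoring are illustrative assumptions, not requirements.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import permutation_importance

    def per_quantile_importances(X, y, feature_names, quantiles=(0.10, 0.50, 0.90)):
        """Fit one quantile model per level and rank predictors by
        permutation importance (default scorer; a pinball-loss scorer
        could be substituted for a tighter match to the metric)."""
        report = {}
        for tau in quantiles:
            model = GradientBoostingRegressor(loss="quantile", alpha=tau).fit(X, y)
            result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
            ranked = np.argsort(result.importances_mean)[::-1]
            report[tau] = [(feature_names[i], float(result.importances_mean[i])) for i in ranked]
        return report

Outputs of this kind could then be rendered as the graphical, tabular, or narrative material described above, for example a bar chart per quantile highlighting the top-ranked predictors.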

Prerequisites

  • Meet all prerequisites and submit all deliverables for the Overall Prize.

Submissions

  • Submit modified solution code that produces explainability and/or communication outputs.
  • Submit a technical write-up that explains the technical methodology and rationale for the outputs. More details about the write-up requirements and evaluation criteria will be announced.