PREPARE Challenge - Phase 2: Report Arena

Advance algorithms and analytic approaches for early prediction of Alzheimer's disease and related dementias, with an emphasis on explainability of predictions.

$250,000 in prizes
Completed Jan 2025
69 joined

Explainability bonus track

The purpose of the Explainability Bonus Track is to demonstrate effective methods for sharing predictions and relevant insights with preclinical patients, care providers, and clinicians.

This track provides an opportunity for participants to demonstrate how their models can support clinical decision-making through clear communication of individual predictions (local explainability). While the main model report focuses on explaining how the model makes predictions in general, this bonus track focuses on explaining why a specific prediction was made. The intended audience for explainability bonus submissions is a patient or others involved in their care, such as a physician, caretaker, or family member, rather than a researcher. Because this challenge focuses on advancing early prediction of AD/ADRD, assume that the fictional patient in each case is preclinical (i.e., cognitively healthy).
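To make the distinction concrete, here is a minimal sketch of a local explanation using SHAP values. The model, data, and feature names are illustrative assumptions rather than anything prescribed by the challenge, and any comparable attribution method would work equally well.

```python
# Minimal local-explainability sketch with SHAP (illustrative model and features).
# Global explainability asks "how does the model behave overall?"; local
# explainability asks "why did the model make THIS prediction for THIS patient?"
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["age", "education_years", "gait_speed", "word_recall", "sleep_hours"]
X = pd.DataFrame(rng.normal(size=(500, 5)), columns=features)
y = (X["word_recall"] - 0.5 * X["gait_speed"] + rng.normal(size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)      # exact SHAP values for tree ensembles
patient = X.iloc[[0]]                      # one focal patient
contribs = explainer.shap_values(patient)  # per-feature contributions to this prediction

for name, value in zip(features, np.ravel(contribs)):
    print(f"{name:>16}: {value:+.3f}")
```

Per-feature contributions like these can then be translated into the plain-language statements an explainer needs (e.g., "a lower word-recall score raised this patient's predicted risk").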

Evaluation

Clinical Usability (40%) How well would the described approach for presenting and explaining model predictions work in a clinical context? Do the explainers provide information that would enable a clinician to decide how to ethically use each individual prediction? Are model limitations acknowledged and explained?

Accessibility and Clarity (30%) How clearly are technical and clinical concepts explained, both in text and visuals? Is sufficient description and context provided? How effectively are the findings communicated to the intended audience (preclinical patients, caregivers, clinicians)?

Rigor (30%) Are quantitative explainability metrics, statistics, and visuals correct and based on sound methodology? Does the report describe key concepts accurately? Would the explainability techniques used generalize beyond the included examples?

Note that explainability bonus submissions will not be evaluated based on how accurately the model performs. Instead, evaluation emphasizes how well model behavior is explained, including strengths and limitations. A good submission will help users understand both when they should and shouldn't trust the model's predictions.
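One way, among many, to communicate when a prediction deserves trust is to attach a calibrated uncertainty interval to each individual prediction. The split-conformal sketch below is an illustration under assumed synthetic data and a regression-style output, not a required technique:

```python
# Split-conformal sketch: attach an ~90%-coverage interval to each prediction
# so an explainer can report "score X, plausibly between lo and hi".
# The model, data, and 90% target are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=1000)

X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Held-out residuals give the half-width that covers ~90% of calibration cases.
residuals = np.abs(y_cal - model.predict(X_cal))
q = np.quantile(residuals, 0.9)

x_new = rng.normal(size=(1, 8))
pred = model.predict(x_new)[0]
print(f"prediction: {pred:.2f}, ~90% interval: [{pred - q:.2f}, {pred + q:.2f}]")
```

A wide interval is itself useful information: it flags an individual prediction that a clinician should weigh cautiously.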

Explainability submission content and format

The purpose of the explainability submission is to demonstrate effective methods for sharing predictions and relevant insights with preclinical patients, care providers, and clinicians.

Required content:

  • Three example one-page prediction explainers
  • Report documenting the methodology behind the model and explainer

Note: You are not allowed to change the model that you submitted to a model arena. See the home page for details.

Technical requirements

  • Length: Maximum 6 pages total (three one-page explainers + one three-page methodology report), including figures and tables but not references
  • Page Size: 8.5x11" with 1" margins
  • Font: Minimum 11pt for main text; minimum 10pt for figures and tables

Your submission should be a single ZIP archive containing the following four PDF files (a minimal packaging sketch follows the list):

  • Three prediction explainers, one per focal patient, each named with that patient's unique identifier (uid):
    • explainer-<uid1>.pdf
    • explainer-<uid2>.pdf
    • explainer-<uid3>.pdf
  • A report:
    • report.pdf
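A minimal packaging sketch, assuming the four PDFs already exist in your working directory (the <uid> names are placeholders for your focal patients' identifiers):

```python
# Package the four required PDFs into a single ZIP archive for submission.
# The <uid1>..<uid3> names are placeholders; substitute each focal patient's uid.
import zipfile

files = [
    "explainer-<uid1>.pdf",
    "explainer-<uid2>.pdf",
    "explainer-<uid3>.pdf",
    "report.pdf",
]

with zipfile.ZipFile("submission.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    for path in files:
        zf.write(path)  # store each PDF at the root of the archive
```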

Submission components and structure

Prediction Explainers (3 × 1 page each)

Choose three patients from the data, and create a one-page explainer for each patient demonstrating how that patient's predicted diagnosis could be communicated. Make sure to choose three examples that demonstrate different types of model behaviors and showcase the strengths of your chosen explainability approach.

  • For the Acoustic Track, you must include one explainer for each ground-truth outcome (Control, MCI, and ADRD).
  • For the Social Determinants Track, you must include one explainer for each of these case types:

    • A typically aging case (someone maintaining stable cognitive function over time)
    • A cognitive decline case (someone showing progression toward or into cognitive decline, based on the difference between the ground-truth 2016 and 2021 composite scores; one way to identify such cases is sketched after this list)
    • A third case of your choice that highlights the strengths of your approach
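As a hypothetical illustration of locating the first two case types, you could rank patients by the change in ground-truth composite score; the column names below are assumptions about a working dataframe, not the official challenge schema:

```python
# Hypothetical case selection for the Social Determinants Track:
# rank patients by change in ground-truth composite score (2021 minus 2016).
# Column names and values are illustrative, not the official schema.
import pandas as pd

df = pd.DataFrame({
    "uid": ["a1", "b2", "c3", "d4"],
    "composite_2016": [52.0, 48.5, 55.1, 50.2],
    "composite_2021": [51.6, 41.0, 55.3, 44.9],
})
df["delta"] = df["composite_2021"] - df["composite_2016"]

stable_case = df.loc[df["delta"].abs().idxmin()]  # typically aging: smallest change
decline_case = df.loc[df["delta"].idxmin()]       # cognitive decline: largest drop
print(stable_case["uid"], decline_case["uid"])
```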

In each explainer, share the predicted value and other relevant context or information. This might include:

  • uncertainty of the prediction,
  • relevant model performance metrics,
  • quantitative explanations for the prediction, and/or
  • contributing patient features.

The goal is to enable clinicians, care providers, and patients to correctly interpret the prediction and decide how to use it responsibly. Visualizations are strongly encouraged.
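For instance, a per-patient contribution chart is one common way to show why a prediction was made; the feature names and values below are invented purely for illustration:

```python
# Sketch of one possible explainer visual: top feature contributions for a
# single patient's prediction. Values are invented (they could be SHAP values).
import matplotlib.pyplot as plt

features = ["Word recall score", "Gait speed", "Sleep quality", "Age", "Years of education"]
contributions = [0.31, -0.18, 0.12, 0.07, -0.05]  # illustrative only

colors = ["tab:red" if c > 0 else "tab:blue" for c in contributions]
fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(features[::-1], contributions[::-1], color=colors[::-1])
ax.axvline(0, color="black", linewidth=0.8)
ax.set_xlabel("Contribution to predicted risk (right raises, left lowers)")
ax.set_title("Why the model made this prediction")
fig.tight_layout()
fig.savefig("explainer_figure.png", dpi=200)
```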

Methodology Report (3 pages)

Describe the technical approach behind your explainers, including how explainability metrics and visualizations were generated and the sources for any factual claims or referenced research.

1. Abstract
  • Brief overview of your technical approach to generate the explainers
  • Brief summary of how the explainers are tailored to an audience of preclinical patients, clinicians, and care providers
2. Technical Approach
  • Summary of the modeling approach and description of model performance
  • Methodology for generating explainability metrics
  • Justification for choice of metrics and visuals
  • Discussion of limitations and uncertainty
3. References
  • Citations for any factual claims or research referenced in the technical approach or the explainers

Additional tips and resources

Strong explainability submissions will:

  • Incorporate visualizations, charts, and tables.
  • Prioritize clarity in writing and visualizations.
  • Consider the context and details that the intended audiences might need in order to understand how a prediction was reached and to decide whether and how it should influence patient care.
  • Consider and mitigate the potential harms of sharing model predictions.
  • Offer risk communication strategies suited to different cultural and educational backgrounds.
  • Help users understand both when they should and shouldn't trust the model's predictions.

To learn more about explainability and communication in clinical contexts, see: