U.S. PETs Prize Challenge: Phase 3 (Red Teams)

Help unlock the potential of privacy-enhancing technologies (PETs) to combat global societal challenges. Test the privacy guarantees of blue teams' federated learning solutions. #privacy

$120,000 in prizes
Mar 2023
14 joined

Problem Description


The objective of the challenge is to develop privacy-preserving federated learning solutions that can train effective models while providing demonstrable privacy against a broad range of threats. The challenge organizers are interested in efficient and usable federated learning solutions that provide end-to-end privacy and security protections while harnessing the potential of AI to overcome significant global challenges. Testing and verifying the claims of such solutions is a key part of the process.

In Phase 3 of the challenge, you will develop and conduct privacy attacks against the solutions of Phase 2's blue team finalists. The results of your attacks will inform the final rankings of the blue teams. Additionally, you will be considered for red team prizes based on the quality of your attacks and reported results.

Phase 3 Structure


After the close of registration, Phase 3 will proceed through two stages:

  • Preparation Period—you will receive blue team concept papers and other materials to begin planning privacy attacks.
  • Attack Period—you will be assigned blue team finalist solutions and conduct attacks on them.

Preparation Period

In this stage, you will receive materials to help you prepare and plan privacy attacks. You will be provided the following:

  • Concept papers from all blue teams participating in Phase 2
  • Development datasets for both Track A and Track B
  • Documentation of the specifications for the blue teams' Phase 2 submissions

During this period, you will not yet know which blue team solutions will be assigned to you. You should therefore be prepared to attack any of the blue team solutions once the Attack Period begins.

Attack Period

At the start of the Attack Period, you will be assigned a set of no more than five blue team finalists' solutions to evaluate.

You will be provided the following to download:

  • Container images and image specifications (Dockerfile) used for blue teams' Phase 2 evaluation
  • Assigned blue teams' Phase 2 submissions, including:
    • Source code for model training and inference
    • All trained models
    • All client–aggregator communication from the simulated federated training, captured by the code execution harness
    • Predictions from conducting inference on test datasets using trained models
  • Evaluation datasets for Track A and/or Track B, as relevant to your assigned blue team solutions, with the same partitioning used in blue teams' Phase 2 evaluation

Guidance or datasets for baseline privacy attacks may be provided; more details will be shared closer to the Attack Period.
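As one illustration of a baseline, the sketch below implements a simple loss-threshold membership inference attack against a target model's predictions. The file names, data layout, and threshold calibration are hypothetical placeholders rather than challenge materials; adapt them to the artifacts you actually receive.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# All file names and the calibration strategy below are hypothetical.
import numpy as np

def per_example_loss(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Cross-entropy loss for each example, given predicted class probabilities."""
    eps = 1e-12
    return -np.log(probs[np.arange(len(labels)), labels] + eps)

# Hypothetical artifacts: the target model's predictions on training data
# (members) and on held-out data (non-members), saved during your experiments.
members = np.load("member_predictions.npz")        # arrays: probs, labels
nonmembers = np.load("nonmember_predictions.npz")  # arrays: probs, labels

member_losses = per_example_loss(members["probs"], members["labels"])
nonmember_losses = per_example_loss(nonmembers["probs"], nonmembers["labels"])

# Guess "member" whenever the loss falls below a calibrated threshold.
all_losses = np.concatenate([member_losses, nonmember_losses])
threshold = all_losses.mean()
guesses = (all_losses < threshold).astype(int)
truth = np.concatenate(
    [np.ones(len(member_losses), dtype=int), np.zeros(len(nonmember_losses), dtype=int)]
)
print(f"Attack accuracy: {(guesses == truth).mean():.3f}")
```

An attack like this tests only one narrow privacy claim; stronger submissions will target the specific guarantees each blue team solution advertises.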

Blue Team Phase 2 Specifications and Documentation


Blue team solutions will generally be demonstrated on one of two data tracks (financial crime prevention or pandemic forecasting), each with its own dataset and machine learning task. Some solutions may be "generalized" solutions that apply to both tracks. All relevant documentation for blue teams is available to you now and can be found on the Phase 2 challenge websites for each track.

All Phase 2 blue team solutions will be trained and tested in a containerized execution runtime. The source code and container image specification for this runtime can be found in the challenge runtime repository.
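If you plan to re-run a blue team pipeline locally, one workable approach is to drive the published container image from a small script. The sketch below is illustrative only: the image tag, Dockerfile location, and mount points are assumptions, and the runtime repository documents the actual build and run procedure.

```python
# Sketch of rebuilding and running a blue team's Phase 2 runtime locally.
# IMAGE, the build context, and the mount paths are hypothetical placeholders.
import subprocess

IMAGE = "pets-phase2-runtime:local"

# Build the image from the provided Dockerfile (here assumed to live in ./runtime).
subprocess.run(["docker", "build", "-t", IMAGE, "./runtime"], check=True)

# Run the container with the blue team submission and data mounted read-only.
subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", "/path/to/submission:/submission:ro",  # assigned team's source code
        "-v", "/path/to/data:/data:ro",              # evaluation data partition
        IMAGE,
    ],
    check=True,
)
```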

Overview of What Teams Submit


You must make a complete submission for each assigned blue team solution. A complete submission consists of:

  • Technical report: A report of no more than 4 pages, describing the privacy claims tested by the attack(s), the attack(s) themselves, and experimental results showing their effectiveness
  • Appendix (Optional): An appendix of no more than 8 pages, to provide additional details and proofs. Note that reviewers will not be required to read the appendix when evaluating your submissions.
  • Code: The implementation used to generate the experimental results in the report
  • Code guide: A code guide of no more than 1 page, describing how the code implements the attack(s) described in the report

Submission Requirements


All submitted documents must adhere to the following style guidelines:

  • Use PDF file format.
  • Use 11-point font for the main text.
  • Set the page size to 8.5" x 11" with 1-inch margins on all sides.
  • Use single line spacing at minimum.

Technical Report

Successful reports will include the following sections. Reports must be no more than 4 pages.

  1. Title
    The title of your submission, matching the abstract.
  2. Abstract
    A brief description of your attack and its effectiveness.
  3. Attack Overview and Threat Model
    This section should describe:
    • Privacy claims made by the target that are broken or tested by your attack
    • Assumptions made about the target, deployment situation, or threat model that enable the attack or impact its effectiveness
    • The realism of the attack, and how it could be applied in a practical deployment
    • Potential mitigations for the attack
  4. Technical Approach & Innovation
    This section should clearly describe the attack itself, including the design and technical details of its construction, and describe any innovative aspects of your approach.
  5. Effectiveness & Generalizability
    This section should describe the effectiveness of your attack on the target solution, provide empirical evidence for your effectiveness claims, and discuss how the attack might generalize to other solutions.
  6. Team Introduction
    An introduction to yourself and your team members (if applicable) that briefly details background and expertise. Optionally, you may explain your interest in the problem.
  7. References
    A reference section.

Appendix (Optional)

If you have supporting details such as proofs, you may include them in an appendix. Note that reviewers will not be required to read the appendix. Appendices must be no more than 8 pages.

Implementation Code

You must provide full, reproducible, and documented source code for your attacks. You will need to satisfy the following requirements:

  • Your code should take the form of one or more scripts or notebooks that, when run, output the results of your privacy attacks as reported in your technical report (a minimal entrypoint skeleton is sketched after this list).
  • Clearly documented instructions for setting up dependencies, including:
    • A specification of your environment, such as a Dockerfile, a requirements.txt file, or a conda environment.yml file
    • Instructions for where to place data dependencies relative to your submission's directory structure
  • Clear documentation of what commands to run to produce the results from your technical report.
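As a concrete illustration of the first requirement, a submission entrypoint might look like the sketch below. Every name here (script name, flags, directory layout, results schema) is a hypothetical placeholder; any structure is acceptable so long as your documented commands reproduce the reported results.

```python
# run_attack.py: hypothetical entrypoint skeleton for an attack submission.
# All paths, flags, and the results schema are placeholders.
import argparse
import json
from pathlib import Path

def run_attack(model_dir: Path, data_dir: Path) -> dict:
    """Execute the privacy attack and return the metrics cited in the report."""
    # ... load the target model and evaluation data, run the attack ...
    return {"attack_accuracy": 0.0}  # placeholder

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Reproduce technical report results.")
    parser.add_argument("--model-dir", type=Path, default=Path("data/models"))
    parser.add_argument("--data-dir", type=Path, default=Path("data/evaluation"))
    parser.add_argument("--out", type=Path, default=Path("results.json"))
    args = parser.parse_args()

    results = run_attack(args.model_dir, args.data_dir)
    args.out.write_text(json.dumps(results, indent=2))
    print(f"Wrote {args.out}")
```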

Please keep in mind that judges will need to review your code as part of the evaluation process. There is no required structure for your code submission, but you are encouraged to use a logical project layout such as Cookiecutter Data Science.

Code Guide

You will be required to create a code guide in the style of a README that documents your code. The code guide should explain all of the components of your code and how they correspond to the conceptual elements of your solution. An effective code guide will provide a mapping between the key parts of your technical paper and the relevant parts of your source code. Please keep in mind that reviewers will need to be able to read and understand your code, so follow code readability best practices as much as you are able to when developing your solution.

Evaluation Criteria


Each submission (corresponding to one assigned blue team solution) will be evaluated by judges according to the following criteria. All scores for a team will be averaged to create a final score used to determine overall red team rankings.

  • Effectiveness (40/100): How completely does the attack break or test the privacy claims made by the target solution (e.g., what portion of user data is revealed, and how accurately is it reconstructed)?
  • Applicability / Threat Model (30/100): How realistic is the attack? How hard would it be to apply in a practical deployment?
  • Generalizability (20/100): Is the attack specific to the target solution, or does it generalize to other solutions?
  • Innovation (10/100): How significantly does the attack improve on the state of the art?
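To make the weighting concrete, the sketch below computes a final red team score under the assumption that judges rate each criterion as a fraction of its available weight and that submission scores are averaged with equal weight; the actual judging rubric may differ.

```python
# Sketch of the scoring scheme described above. The per-criterion rating
# scale (fractions in [0, 1]) is an assumption, not the official rubric.
WEIGHTS = {
    "effectiveness": 40,
    "applicability": 30,
    "generalizability": 20,
    "innovation": 10,
}

def submission_score(ratings: dict[str, float]) -> float:
    """Weighted score (out of 100) for one assigned blue team solution."""
    return sum(weight * ratings[criterion] for criterion, weight in WEIGHTS.items())

# Example: judge ratings for two assigned solutions.
submissions = [
    {"effectiveness": 0.8, "applicability": 0.6, "generalizability": 0.5, "innovation": 0.7},
    {"effectiveness": 0.5, "applicability": 0.9, "generalizability": 0.4, "innovation": 0.3},
]
scores = [submission_score(s) for s in submissions]
print(f"Final score: {sum(scores) / len(scores):.1f} / 100")  # 62.5
```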


Good luck


Good luck and enjoy this problem! If you have any questions, you can always ask the community by visiting the DrivenData user forum or the cross-U.S.–U.K. public Slack channel. You can request access to the Slack channel here.