Leaderboard
NOTE: This leaderboard ranks teams by model AUPRC score and does not reflect the final rankings based on the overall evaluation criteria. Please see the winners announcement for the final rankings.
Average Precision for Partitioning Scenario N1: Average Precision (AP), equivalent to the area under the precision–recall curve (AUPRC or PRAUC) with no interpolation, ranges from 0 to 1; the goal is to maximize AP. For multilabel classification, the metric is macro-averaged: AP is computed for each label separately, and the final score is the unweighted mean of the per-label values. For more information, see sklearn's documentation.

AP $= \sum_n (R_n - R_{n-1}) P_n$

where $P_n$ and $R_n$ are the precision and recall at the $n$-th threshold.
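The macro-averaged AP described above can be sketched with scikit-learn; the label matrix and scores below are made-up illustration data, not competition data:

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Toy multilabel data (illustrative only): rows are samples, columns are labels.
y_true = np.array([[1, 0, 1],
                   [0, 1, 1],
                   [1, 1, 0],
                   [0, 0, 1]])
y_score = np.array([[0.9, 0.2, 0.8],
                    [0.1, 0.7, 0.6],
                    [0.8, 0.6, 0.3],
                    [0.2, 0.1, 0.9]])

# Macro-average: AP is computed per label, then averaged with equal weight.
macro_ap = average_precision_score(y_true, y_score, average="macro")

# Equivalent explicit per-label computation.
per_label = [average_precision_score(y_true[:, j], y_score[:, j])
             for j in range(y_true.shape[1])]
assert np.isclose(macro_ap, np.mean(per_label))
print(macro_ap)
```

Note that `average_precision_score` implements the non-interpolated sum $\sum_n (R_n - R_{n-1}) P_n$, which differs from interpolated AUPRC variants that can be overly optimistic.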
Average Precision for Partitioning Scenario N2: the same macro-averaged AP metric, computed on Partitioning Scenario N2.
Average Precision for Partitioning Scenario N3: the same macro-averaged AP metric, computed on Partitioning Scenario N3.
All times are in Coordinated Universal Time (UTC), also known as Greenwich Mean Time (GMT).