This leaderboard displays only AUPRC scores and may not reflect final rankings, which are based on the overall challenge evaluation criteria. See the "Problem Description" page for the full criteria.
Average Precision for Partitioning Scenarios N1, N2, and N3: each scenario is scored with Average Precision (AP), equivalent to the area under the precision–recall curve (AUPRC or PRAUC) with no interpolation. AP ranges from 0 to 1, and the goal is to maximize it. For multilabel classification, the metric is calculated as a macro-average: the final score is the unweighted mean of the AP values calculated for each label separately. For more information, see sklearn's documentation.

AP $= \sum_n (R_n - R_{n-1}) P_n$

where $P_n$ and $R_n$ are the precision and recall at the $n$-th threshold.
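As a minimal sketch of how this metric can be computed with scikit-learn's `average_precision_score` (the arrays below are illustrative placeholders, not challenge data):

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Illustrative multilabel data: 4 samples, 3 labels.
# y_true holds binary ground-truth indicators; y_score holds
# predicted scores (e.g., probabilities) for each label.
y_true = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [1, 1, 0],
                   [0, 0, 1]])
y_score = np.array([[0.9, 0.2, 0.1],
                    [0.1, 0.8, 0.8],
                    [0.7, 0.6, 0.2],
                    [0.2, 0.3, 0.9]])

# average=None returns one AP value per label; the macro-average
# is their unweighted mean, which average="macro" computes directly.
per_label_ap = average_precision_score(y_true, y_score, average=None)
macro_ap = average_precision_score(y_true, y_score, average="macro")

print("Per-label AP:", per_label_ap)
print(f"Macro-averaged AP: {macro_ap:.4f}")
assert np.isclose(macro_ap, per_label_ap.mean())
```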
All times are in Coordinated Universal Time (UTC), also known as Greenwich Mean Time (GMT).