SNOMED CT Entity Linking Challenge

Link spans of text in clinical notes to concepts in the SNOMED CT clinical terminology.

$25,000 in prizes
March 2024
553 participants joined

A synthetic example of a medical note annotated with SNOMED CT concepts. Annotated concepts are highlighted in green, with their concept IDs, names, and categories shown.

Why

Much of the world's healthcare data is stored in free-text documents, usually clinical notes written by doctors. It is difficult to analyze this unstructured data or to extract meaningful insights from it.

However, by applying a standardized terminology like SNOMED CT, healthcare organizations can convert this free-text data into a structured format that can be readily analyzed by computers, in turn stimulating the development of new medicines, treatment pathways, and better patient outcomes.

The Solution

One way to analyze clinical notes is to identify and label the portions of each note that correspond to specific medical concepts, a task known as entity linking. Entity linking medical notes is no small feat; notes are often rife with abbreviations and assumed knowledge, and the knowledge base itself can include hundreds of thousands of medical concepts.
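
To make the task concrete, here is a minimal sketch of what an entity-linking output might look like: character spans in a note mapped to SNOMED CT concept IDs. The note text, offsets, column names, and concept IDs below are illustrative and do not reflect the challenge's exact data format.

```python
# Illustrative sketch of entity-linking output: character spans linked to
# SNOMED CT concept IDs. Offsets, column names, and IDs are for illustration only.
import pandas as pd

note = "Pt with h/o HTN and T2DM presents with chest pain."

predictions = pd.DataFrame(
    [
        # start/end are character offsets into `note` (end exclusive);
        # concept_id is the SNOMED CT identifier the span is linked to.
        {"note_id": "note_001", "start": 12, "end": 15, "concept_id": 38341003},  # "HTN" -> hypertensive disorder
        {"note_id": "note_001", "start": 20, "end": 24, "concept_id": 44054006},  # "T2DM" -> type 2 diabetes mellitus
        {"note_id": "note_001", "start": 39, "end": 49, "concept_id": 29857009},  # "chest pain" -> chest pain
    ]
)
print(predictions)
```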

The objective of the SNOMED CT Entity Linking Challenge was to link spans of text in clinical notes to specific concepts in the SNOMED CT clinical terminology. Participants built systems based on de-identified real-world doctors' notes that were annotated with SNOMED CT concepts by medically trained professionals. This is the largest publicly available dataset of labeled clinical notes, and the challenge participants were among the first to use it!

The Results

At the start of the competition, DrivenData released a benchmark solution developed by our partners at Veratai. The competition metric was based on the character intersection-over-union (IoU) of predicted and actual spans, macro-averaged by class to weight all classes equally. The benchmark achieved a score of 0.1794 by that metric; by the close of the competition, 25 participants had surpassed the benchmark score!
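
For reference, the scoring logic can be sketched as follows, assuming each annotation is a (note_id, start, end, concept_id) tuple with character offsets. This is an illustrative reimplementation of the metric as described above, not the official scoring code.

```python
# Sketch of class macro-averaged character IoU: per concept class, compare the
# sets of annotated character positions, then average the IoUs over classes.
from collections import defaultdict

def macro_char_iou(predicted, actual):
    """predicted/actual: iterables of (note_id, start, end, concept_id) tuples."""
    pred_chars = defaultdict(set)
    true_chars = defaultdict(set)
    for note_id, start, end, concept_id in predicted:
        pred_chars[concept_id].update((note_id, i) for i in range(start, end))
    for note_id, start, end, concept_id in actual:
        true_chars[concept_id].update((note_id, i) for i in range(start, end))

    # Per-class IoU = |intersection| / |union| of character positions,
    # macro-averaged so that every class is weighted equally.
    classes = set(pred_chars) | set(true_chars)
    ious = []
    for concept_id in classes:
        p, t = pred_chars[concept_id], true_chars[concept_id]
        union = p | t
        ious.append(len(p & t) / len(union) if union else 0.0)
    return sum(ious) / len(ious) if ious else 0.0
```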

The winning teams drew on a diverse set of approaches to data, algorithms, and everything in between. These included curating a large set of open-source medical synonyms and abbreviations (nearly one million!), incorporating simple linguistic rules, fine-tuning state-of-the-art large language models (LLMs) with low-rank adaptation (LoRA), prompt engineering, and retrieval-augmented generation (RAG) to enrich the LLM prompt context with relevant documents. Overall, the results demonstrate significant progress on this challenging task.
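
As a toy illustration of the synonym-dictionary idea, the sketch below scans a note for known surface forms (including abbreviations) and links exact matches to concept IDs. The synonym table and concept IDs are assumptions for illustration; the winning solutions were substantially more sophisticated.

```python
# Toy dictionary/synonym matcher: map surface forms to SNOMED CT concept IDs
# and return (note_id, start, end, concept_id) spans for exact matches.
import re

synonyms = {
    "htn": 38341003,          # hypertensive disorder
    "hypertension": 38341003,
    "t2dm": 44054006,         # type 2 diabetes mellitus
    "chest pain": 29857009,
}

def link_by_dictionary(note, note_id="note_001"):
    spans = []
    for surface, concept_id in synonyms.items():
        # \b keeps matches on word boundaries; case-insensitive to catch "HTN".
        for match in re.finditer(rf"\b{re.escape(surface)}\b", note, flags=re.IGNORECASE):
            spans.append((note_id, match.start(), match.end(), concept_id))
    return spans

print(link_by_dictionary("Pt with h/o HTN and T2DM presents with chest pain."))
```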

Comparison of the benchmark and three winning solutions in terms of the competition metric, class macro-averaged character intersection-over-union (IoU).

You can find the winners' code and write-ups in the winners' repository.