DT4H_XLM-R_stl_multilingual_disease

Model Description

This multilingual clinical Named Entity Recognition (NER) model identifies disease mentions in biomedical and clinical text. It is based on xlm-roberta-base and fine-tuned, following a single-task learning (STL) approach with the BIO tagging scheme for sequence labeling, on translated variants of the clinical NER datasets DisTEMIST and CardioCCC, which consist of clinical case reports with manually annotated disease mentions.

  • Architecture: Single-task learning (STL)
  • Training setup: Multilingual, Monolabel (DISEASE)
  • Supported languages:
    • Spanish (es)
    • Italian (it)
    • Romanian (ro)
    • English (en)
    • Dutch (nl)
    • Swedish (sv)
    • Czech (cs)
  • Base model: xlm-roberta-base
  • Task: Token classification (NER)
  • Label scheme: BIO
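As an illustration of the BIO scheme with a single DISEASE label, each word receives one of three tags: B-DISEASE (beginning of a mention), I-DISEASE (inside a mention), or O (outside). The example sentence and tags below are illustrative:

```python
# BIO tagging with a single DISEASE label.
words = ["El", "paciente", "presenta", "insuficiencia", "cardíaca", "."]
tags  = ["O",  "O",        "O",        "B-DISEASE",     "I-DISEASE", "O"]

# A multi-word mention is reconstructed by joining a B- tag
# with the I- tags that follow it.
mention = " ".join(w for w, t in zip(words, tags) if t.endswith("DISEASE"))
print(mention)  # insuficiencia cardíaca
```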

Training Data

The model is trained on multilingual clinical NER data combining DisTEMIST and CardioCCC across the supported languages.

The data is part of the MultiClinNER subtask of the MultiClinAI shared task, an initiative of the DataTools4Heart (DT4H) project, which provides translated and annotation-projected clinical corpora.

Training and test splits correspond to the MultiClinNER task at the 11th SMM4H-HeaRD Workshop (ACL 2026).

How to use

You can load the model using Hugging Face Transformers:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "judithrosell/DT4H_XLM-R_stl_multilingual_disease"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Token-level NER pipeline; each returned entry corresponds to one subword token.
ner_pipeline = pipeline("token-classification", model=model, tokenizer=tokenizer)

text = "El paciente presenta insuficiencia cardíaca."  # "The patient presents heart failure."
predictions = ner_pipeline(text)
print(predictions)
```

Note: We recommend pre-tokenizing the input text into words, as this matches the training setup. Providing raw text directly may lead to slightly degraded performance.
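One way to pre-tokenize while keeping track of character offsets (needed later to map predictions back to spans) is a simple regex splitter. This is a minimal sketch; the exact pre-tokenization used during training may differ:

```python
import re

def pretokenize(text):
    """Split text into words/punctuation while recording character offsets,
    so token-level predictions can later be mapped back to character spans."""
    return [(m.group(), m.start(), m.end())
            for m in re.finditer(r"\w+|[^\w\s]", text)]

text = "El paciente presenta insuficiencia cardíaca."
print(pretokenize(text))
# [('El', 0, 2), ('paciente', 3, 11), ('presenta', 12, 20),
#  ('insuficiencia', 21, 34), ('cardíaca', 35, 43), ('.', 43, 44)]
```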

The model outputs token-level predictions. For evaluation or submission, these predictions should be converted into character-level spans.
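A minimal sketch of this conversion, assuming token predictions already carry character offsets (the tuple layout here is illustrative, not the pipeline's exact output format):

```python
def bio_to_spans(tokens):
    """Merge BIO-tagged tokens (word, tag, start, end) into character-level
    mention spans. A B- tag opens a span; following I- tags of the same
    label extend it; anything else closes it."""
    spans, current = [], None
    for word, tag, start, end in tokens:
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = {"label": tag[2:], "start": start, "end": end}
        elif tag.startswith("I-") and current and tag[2:] == current["label"]:
            current["end"] = end
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return spans

tokens = [
    ("El", "O", 0, 2),
    ("paciente", "O", 3, 11),
    ("presenta", "O", 12, 20),
    ("insuficiencia", "B-DISEASE", 21, 34),
    ("cardíaca", "I-DISEASE", 35, 43),
    (".", "O", 43, 44),
]
print(bio_to_spans(tokens))  # [{'label': 'DISEASE', 'start': 21, 'end': 43}]
```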

To facilitate this process, we provide an inference script in a GitHub repository that:

  • Loads the model
  • Processes .txt files from an input directory
  • Extracts disease mentions
  • Exports predictions as a TSV file in the format required for the MultiClinAI evaluation library:
```
filename                              label      start_span    end_span    text
MultiClinNER-en-test-disease-00019    DISEASE    154           165         myocarditis
```
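The export step can be sketched as follows; the output path and example row are illustrative, and the full inference script in the GitHub repository remains the reference implementation:

```python
import csv

def write_predictions_tsv(rows, path):
    """Write extracted mentions to the TSV format expected by the
    MultiClinAI evaluation library (one mention per row)."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["filename", "label", "start_span", "end_span", "text"])
        writer.writerows(rows)

# Hypothetical example row matching the format above.
write_predictions_tsv(
    [("MultiClinNER-en-test-disease-00019", "DISEASE", 154, 165, "myocarditis")],
    "predictions.tsv",
)
```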

Limitations and bias

At the time of submission, no formal bias or fairness evaluation has been conducted. We intend to carry out such evaluations in the future and will update this model card accordingly.

Evaluation

Evaluation was conducted using strict (exact match) and character-level metrics on the MultiClinNER test set.

| Language | Strict P | Strict R | Strict F1 | Char P | Char R | Char F1 |
|----------|----------|----------|-----------|--------|--------|---------|
| es       | 0.6605   | 0.6681   | 0.6642    | 0.7900 | 0.7963 | 0.7931  |
| it       | 0.6873   | 0.6141   | 0.6486    | 0.8138 | 0.7275 | 0.7682  |
| ro       | 0.7139   | 0.6886   | 0.7011    | 0.8225 | 0.7955 | 0.8088  |
| en       | 0.7143   | 0.6653   | 0.6890    | 0.8356 | 0.7778 | 0.8057  |
| nl       | 0.6788   | 0.6305   | 0.6538    | 0.7866 | 0.7294 | 0.7569  |
| sv       | 0.6839   | 0.6498   | 0.6664    | 0.7950 | 0.7554 | 0.7747  |
| cs       | 0.6747   | 0.6258   | 0.6494    | 0.7937 | 0.7358 | 0.7636  |
| Average  |          |          | 0.6675    |        |        | 0.7816  |

Additional information

Authors

NLP4BIA team at the Barcelona Supercomputing Center (nlp4bia@bsc.es).

Contact information

judith.rosell [at] bsc.es

Funding

This model is part of the DataTools4Heart project, funded by the European Union's Horizon Europe Framework Programme under Grant Agreement No. 101057849.
