Related paper: *A foundation model utilizing chest CT volumes and radiology reports for supervised-level zero-shot detection of abnormalities* (arXiv:2403.17834)
The task is to train a single multimodal model for CT report generation and visual question answering (VQA). The dataset contains two subsets, covering abdomen and lung CT, each supporting report generation and VQA:
```
FLARE-Task5-MLLM-3D/
├── README.md
├── train/                # training set
│   ├── CT-AMOS-1290/     # source: https://era-ai-biomed.github.io/amos/
│   ├── CT-AMOS-Tr.json
│   ├── CT-RATE-2000/     # source: https://huggingface.co/datasets/ibrahimhamamci/CT-RATE
│   │                     # (unique patients with disease and clinical information were selected)
│   └── CT-RATE-Tr.json
├── validation/           # validation set
│   ├── images/
│   └── val.json
```

To avoid potential data leakage, the testing set will not be released.
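As a quick sanity check after download, the annotation JSON files can be inspected directly. The sketch below is a minimal example, assuming the files deserialize to standard Python lists/dicts; the exact field names are not documented above, so the script only prints whatever structure is actually present rather than assuming a schema.

```python
import json
from pathlib import Path

root = Path("./FLARE-Task5-MLLM-3D")

# Load the CT-RATE training annotations and report their structure.
with open(root / "train" / "CT-RATE-Tr.json") as f:
    annotations = json.load(f)

print("top-level type:", type(annotations).__name__)
if isinstance(annotations, list) and annotations:
    first = annotations[0]
    print("number of records:", len(annotations))
    if isinstance(first, dict):
        print("fields in first record:", list(first.keys()))
```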
Other public datasets can be used, but participants must submit an algorithm docker trained only on the provided dataset. Each team can submit two dockers:

- `teamname.tar.gz` (trained only on the provided dataset)
- `teamname_publicdata.tar.gz` (may additionally use other public datasets)

Participants can download the complete dataset using the following Python script:
```python
from huggingface_hub import snapshot_download

local_dir = "./FLARE-Task5-MLLM-3D"
snapshot_download(
    repo_id="FLARE-MedFM/FLARE-Task5-MLLM-3D",
    repo_type="dataset",
    local_dir=local_dir,
    local_dir_use_symlinks=False,  # copy real files instead of symlinking into the HF cache
    resume_download=True,          # resume interrupted downloads
)
```
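Once downloaded, individual CT volumes can be loaded for preprocessing. Below is a minimal sketch, assuming the volumes are stored as NIfTI files (`.nii.gz`, the format used by AMOS and CT-RATE) and that `nibabel` is installed; the exact file layout should be checked against the downloaded data.

```python
import nibabel as nib
import numpy as np
from pathlib import Path

# Assumption: CT volumes are NIfTI files somewhere under the training folder.
ct_paths = sorted(Path("./FLARE-Task5-MLLM-3D/train").rglob("*.nii.gz"))
print(f"found {len(ct_paths)} volumes")

img = nib.load(ct_paths[0])
volume = img.get_fdata().astype(np.float32)
print("shape:", volume.shape)                    # e.g. (H, W, num_slices)
print("voxel spacing (mm):", img.header.get_zooms())

# Typical CT preprocessing: clip to a Hounsfield-unit window, then scale to [0, 1].
volume = np.clip(volume, -1000, 1000)
volume = (volume + 1000) / 2000
```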
We also provide segmentations generated by TotalSegmentator for the AMOS-MM dataset. These can be downloaded here.
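The segmentation masks can be used, for example, to relate report findings to specific organs. The sketch below is illustrative only: it assumes each mask is a NIfTI label map voxel-aligned with its CT volume, and the file paths are hypothetical placeholders, not the actual layout of the segmentation release.

```python
import nibabel as nib
import numpy as np

# Hypothetical paths; adjust to the actual layout of the downloaded files.
ct = nib.load("CT-AMOS-1290/case_0001.nii.gz").get_fdata()
seg = nib.load("segmentations/case_0001_seg.nii.gz").get_fdata().astype(np.int32)

assert ct.shape == seg.shape, "mask should be voxel-aligned with the CT volume"

# Count voxels per label; the label-to-organ mapping is defined by TotalSegmentator.
labels, counts = np.unique(seg[seg > 0], return_counts=True)
for label, count in zip(labels, counts):
    print(f"label {label}: {count} voxels")
```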