---
license: other
library_name: transformers
---

# Model card for RAD-DINO

<!-- Provide a quick summary of what the model is/does. -->

## Model description

<!-- Provide a longer summary of what this model is. -->

RAD-DINO-MAIRA-2 is a vision transformer model trained to encode chest X-rays using the self-supervised learning method [DINOv2](https://openreview.net/forum?id=a68SUt6zFt). RAD-DINO-MAIRA-2 is a variant of [RAD-DINO](https://huggingface.co/microsoft/rad-dino), which is described in detail in [RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision (F. Pérez-García, H. Sharma, S. Bond-Taylor, et al., 2024)](https://arxiv.org/abs/2401.10815).

RAD-DINO-MAIRA-2 is the version of RAD-DINO used in [MAIRA-2: Grounded Radiology Report Generation (S. Bannur, K. Bouzid, et al., 2024)](https://arxiv.org/abs/2406.04449). Relative to [RAD-DINO](https://huggingface.co/microsoft/rad-dino), it was trained on more data.

- **Developed by:** Microsoft Health Futures
- **Model type:** Vision transformer
- **License:** MSRLA
- **Finetuned from model:** [`dinov2-base`](https://huggingface.co/facebook/dinov2-base)

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

RAD-DINO-MAIRA-2 is shared for research purposes only.
It is **not meant to be used for clinical practice**.

<!-- ### Downstream use -->

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

The model is a vision backbone that can be plugged into other models for downstream tasks.
Some potential uses are:

- Image classification, with a classifier trained on top of the `CLS` token
- Image segmentation, with a decoder trained using the patch tokens
- Clustering, using the image embeddings directly
- Image retrieval, using nearest neighbors of the `CLS` token
- Report generation, with a language model to decode text

Fine-tuning RAD-DINO-MAIRA-2 is typically not necessary to obtain good performance in downstream tasks.
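As a concrete illustration of the first use case, a linear probe can be trained on top of the frozen `CLS` embeddings. The sketch below is not part of the model: the two-class head and the random tensor standing in for `outputs.pooler_output` are purely hypothetical.

```python
import torch
from torch import nn

# Hypothetical linear probe on frozen CLS embeddings (illustrative only).
# 768 is the embedding dimension of the ViT-B backbone.
EMBEDDING_DIM = 768
NUM_CLASSES = 2  # e.g., "finding" vs. "no finding" (made-up labels)

classifier = nn.Linear(EMBEDDING_DIM, NUM_CLASSES)

# Stand-in for `outputs.pooler_output` returned by the encoder
cls_embeddings = torch.randn(1, EMBEDDING_DIM)

with torch.inference_mode():
    logits = classifier(cls_embeddings)
    probabilities = logits.softmax(dim=-1)  # shape (1, 2), rows sum to 1
```

In practice the backbone would stay frozen while only the linear head is trained, which keeps the probe cheap to fit even on modest hardware.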
<!-- ### Out-of-scope use -->

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

## Biases, risks, and limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

RAD-DINO-MAIRA-2 was trained with data from three countries, so it might be biased towards the populations represented in the training data.
Underlying biases of the training datasets may not be well characterized.

## Getting started

Let us first write an auxiliary function to download a chest X-ray.

```python
>>> import requests
>>> from PIL import Image
>>> def download_sample_image() -> Image.Image:
...     """Download chest X-ray with CC license."""
...     base_url = "https://upload.wikimedia.org/wikipedia/commons"
...     image_url = f"{base_url}/2/20/Chest_X-ray_in_influenza_and_Haemophilus_influenzae.jpg"
...     headers = {"User-Agent": "RAD-DINO"}
...     response = requests.get(image_url, headers=headers, stream=True)
...     return Image.open(response.raw)
...
```

Now let us download the model and encode an image.

```python
>>> import torch
>>> from transformers import AutoModel
>>> from transformers import AutoImageProcessor
>>>
>>> # Download the model
>>> repo = "microsoft/rad-dino-maira-2"
>>> model = AutoModel.from_pretrained(repo)
>>>
>>> # The processor takes a PIL image, performs resizing, center-cropping, and
>>> # intensity normalization using stats from MIMIC-CXR, and returns a
>>> # dictionary with a PyTorch tensor ready for the encoder
>>> processor = AutoImageProcessor.from_pretrained(repo)
>>>
>>> # Download and preprocess a chest X-ray
>>> image = download_sample_image()
>>> image.size  # (width, height)
(2765, 2505)
>>> inputs = processor(images=image, return_tensors="pt")
>>>
>>> # Encode the image!
>>> with torch.inference_mode():
...     outputs = model(**inputs)
...
>>> # Look at the CLS embeddings
>>> cls_embeddings = outputs.pooler_output
>>> cls_embeddings.shape  # (batch_size, num_channels)
torch.Size([1, 768])
```
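The `CLS` embeddings can also be used directly for image retrieval, one of the uses listed above. Below is a minimal sketch; the random vectors stand in for `outputs.pooler_output` computed over an indexed image collection, and the gallery size is arbitrary.

```python
import torch
import torch.nn.functional as F

# Stand-ins for CLS embeddings of 100 indexed images and one query image;
# in practice each row would be `outputs.pooler_output` from the encoder.
gallery = F.normalize(torch.randn(100, 768), dim=-1)
query = F.normalize(torch.randn(1, 768), dim=-1)

# After L2 normalization, cosine similarity reduces to a dot product
similarities = query @ gallery.T  # shape (1, 100)

# Indices of the 5 nearest neighbors, sorted by decreasing similarity
top_scores, top_indices = similarities.topk(k=5, dim=-1)
```

For large collections, an approximate-nearest-neighbor index would replace the brute-force matrix product, but the embedding side stays the same.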
If we are interested in the feature maps, we can reshape the patch embeddings into a grid.
We will use [`einops`](https://einops.rocks/) (install with `pip install einops`) for this.

```python
>>> def reshape_patch_embeddings(flat_tokens: torch.Tensor) -> torch.Tensor:
...     """Reshape flat list of patch tokens into a nice grid."""
...     from einops import rearrange
...     image_size = processor.crop_size["height"]
...     patch_size = model.config.patch_size
...     embeddings_size = image_size // patch_size
...     patches_grid = rearrange(flat_tokens, "b (h w) c -> b c h w", h=embeddings_size)
...     return patches_grid
...
>>> flat_patch_embeddings = outputs.last_hidden_state[:, 1:]  # first token is CLS
>>> reshaped_patch_embeddings = reshape_patch_embeddings(flat_patch_embeddings)
>>> reshaped_patch_embeddings.shape  # (batch_size, num_channels, height, width)
torch.Size([1, 768, 37, 37])
```
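A common way to inspect such a feature map is to project its channels onto their first three principal components and render the result as an RGB image. This is a sketch, not code from the paper; a random tensor stands in for `reshaped_patch_embeddings`.

```python
import torch

# Stand-in for `reshaped_patch_embeddings`, shape (batch, channels, h, w)
patches_grid = torch.randn(1, 768, 37, 37)

flat = patches_grid.flatten(2).squeeze(0).T  # (1369, 768): one row per patch
flat = flat - flat.mean(dim=0)               # center (pca_lowrank also centers)
_, _, components = torch.pca_lowrank(flat, q=3)
projected = flat @ components                # (1369, 3)

# Min-max normalize each component to [0, 1] for display as RGB channels
mins = projected.min(dim=0).values
maxs = projected.max(dim=0).values
rgb_map = ((projected - mins) / (maxs - mins)).T.reshape(3, 37, 37)
```

The resulting `rgb_map` can be passed to any image-plotting routine, e.g. after transposing to height × width × channels.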
## Training details

### Training data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

We used images from five public and one private deidentified chest X-ray datasets to train RAD-DINO-MAIRA-2.

| Dataset | Num. images |
| --------- | ----------: |
| [MIMIC-CXR](https://www.nature.com/articles/s41597-019-0322-0) | 368 960 |
| [CheXpert](https://ojs.aaai.org/index.php/AAAI/article/view/3834) | 223 648 |
| [NIH-CXR](https://openaccess.thecvf.com/content_cvpr_2017/html/Wang_ChestX-ray8_Hospital-Scale_Chest_CVPR_2017_paper.html) | 112 120 |
| [PadChest](https://www.sciencedirect.com/science/article/abs/pii/S1361841520301614) | 136 787 |
| [BRAX](https://www.nature.com/articles/s41597-022-01608-8) | 41 260 |
| USMix (Private) | 521 608 |
| **TOTAL** | 1 404 383 |

Images in the validation and test sets used to train [MAIRA-2](https://arxiv.org/abs/2406.04449) were excluded from the training set of RAD-DINO-MAIRA-2.

We used 8 nodes with 4 A100 GPUs each, and a batch size of 40 images per GPU (an effective batch size of 1 280 images).
We share the last checkpoint, trained for 105 000 steps.

### Training procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

We refer to the [manuscript](https://arxiv.org/abs/2401.10815) for a detailed description of the training procedure.

#### Preprocessing

All DICOM files were resized using B-spline interpolation so that their shorter side was 518 pixels, min-max scaled to [0, 255], and stored as PNG files.
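The min-max scaling step can be sketched as follows. This is an illustration using NumPy, not the actual pipeline code (which used SimpleITK and Pydicom); the example pixel values are made up.

```python
import numpy as np

def min_max_scale_to_uint8(pixels: np.ndarray) -> np.ndarray:
    """Min-max scale an array to [0, 255] and cast to uint8."""
    pixels = pixels.astype(np.float64)
    lo, hi = pixels.min(), pixels.max()
    scaled = (pixels - lo) / (hi - lo) * 255
    return np.round(scaled).astype(np.uint8)

# Example: a tiny fake 12-bit radiograph stored in a 16-bit array
fake_dicom_pixels = np.array([[0, 1024], [2048, 4095]], dtype=np.uint16)
scaled = min_max_scale_to_uint8(fake_dicom_pixels)
```

Scaling per image (rather than with fixed window/level settings) maps the full dynamic range of each radiograph onto the 8-bit PNG range.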
#### Training hyperparameters

- **Training regime:** fp16 using PyTorch-FSDP mixed-precision.

<!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

Our evaluation is best described in the [manuscript](https://arxiv.org/abs/2401.10815).

<!-- ### Testing data, factors & metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary -->

## Environmental impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

<!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). -->

- **Hardware type:** NVIDIA A100 GPUs
- **Hours used:** 41 hours/GPU × 8 nodes × 4 GPUs/node = 1 312 GPU-hours
- **Cloud provider:** Azure
- **Compute region:** West US 2
- **Carbon emitted:** 98.4 kg CO₂ eq.

### Compute infrastructure

RAD-DINO-MAIRA-2 was trained on [Azure Machine Learning](https://azure.microsoft.com/en-us/products/machine-learning).

#### Hardware

We used 8 `Standard_NC96ads_A100_v4` nodes with four NVIDIA A100 (80 GB) GPUs each.

#### Software

We leveraged the code in [DINOv2](https://openreview.net/forum?id=a68SUt6zFt) for training.
We used [SimpleITK](https://simpleitk.org/) and [Pydicom](https://pydicom.github.io/) for processing DICOM files.

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@misc{perezgarcia2024raddino,
      title={{RAD-DINO}: Exploring Scalable Medical Image Encoders Beyond Text Supervision},
      author={Fernando Pérez-García and Harshita Sharma and Sam Bond-Taylor and Kenza Bouzid and Valentina Salvatelli and Maximilian Ilse and Shruthi Bannur and Daniel C. Castro and Anton Schwaighofer and Matthew P. Lungren and Maria Wetscherek and Noel Codella and Stephanie L. Hyland and Javier Alvarez-Valle and Ozan Oktay},
      year={2024},
      eprint={2401.10815},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

**APA:**

> Pérez-García, F., Sharma, H., Bond-Taylor, S., Bouzid, K., Salvatelli, V., Ilse, M., Bannur, S., Castro, D.C., Schwaighofer, A., Lungren, M.P., Wetscherek, M.T., Codella, N., Hyland, S.L., Alvarez-Valle, J., & Oktay, O. (2024). *RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision*. arXiv, abs/2401.10815.

## Model card contact

Fernando Pérez-García ([`[email protected]`](mailto:[email protected])).