---
base_model: 1aurent/vit_base_patch16_224.owkin_pancancer
tags:
- image-classification
- timm
- owkin
- biology
- cancer
- colon
library_name: timm
datasets:
- 1aurent/LC25000
metrics:
- accuracy
pipeline_tag: image-classification
model-index:
- name: owkin_pancancer_ft_lc25000_colon
  results:
  - task:
      type: image-classification
      name: Image Classification
    dataset:
      name: 1aurent/LC25000
      type: image-classification
    metrics:
    - type: accuracy
      value: 0.999
      name: accuracy
      verified: false
widget:
- src: >-
    https://datasets-server.huggingface.co/cached-assets/1aurent/LC25000/--/56a7c495692c27afd294a88b7aadaa7b79d8e270/--/default/train/24999/image/image.jpg
  example_title: benign
- src: >-
    https://datasets-server.huggingface.co/cached-assets/1aurent/LC25000/--/56a7c495692c27afd294a88b7aadaa7b79d8e270/--/default/train/17501/image/image.jpg
  example_title: adenocarcinomas
license: other
license_name: owkin-non-commercial
license_link: https://github.com/owkin/HistoSSLscaling/blob/main/LICENSE.txt
---

# Model card for vit_base_patch16_224.owkin_pancancer_ft_lc25000_colon

A Vision Transformer (ViT) image classification model. \
Trained by Owkin on 40M pan-cancer histology tiles from TCGA. \
Fine-tuned on LC25000's colon subset.

## Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 85.8
  - Image size: 224 x 224 x 3
- **Papers:**
  - Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling: https://www.medrxiv.org/content/10.1101/2023.07.21.23292757v2
- **Pretrain Dataset:** TCGA: https://portal.gdc.cancer.gov/
- **Dataset:** LC25000: https://huggingface.co/datasets/1aurent/LC25000
- **Original:** https://github.com/owkin/HistoSSLscaling/
- **License:** https://github.com/owkin/HistoSSLscaling/blob/main/LICENSE.txt

## Model Usage

### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm

# get example histology image
img = Image.open(
  urlopen(
    "https://datasets-server.huggingface.co/cached-assets/1aurent/LC25000/--/56a7c495692c27afd294a88b7aadaa7b79d8e270/--/default/train/24999/image/image.jpg"
  )
)

# load model from the hub
model = timm.create_model(
  model_name="hf-hub:1aurent/vit_base_patch16_224.owkin_pancancer_ft_lc25000_colon",
  pretrained=True,
).eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
```
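The `output` above is a tensor of unnormalized logits. A minimal sketch of turning it into class probabilities with `torch.softmax`; the widget examples suggest two classes (benign vs. adenocarcinoma), but the exact index-to-label order is an assumption and depends on the fine-tuning label mapping:

```python
import torch

# convert logits to probabilities over the fine-tuned classes
probabilities = torch.softmax(output, dim=-1)

# report the most likely class index and its probability
top_prob, top_idx = torch.topk(probabilities, k=1)
print(f"predicted class index {top_idx.item()} with probability {top_prob.item():.3f}")
```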

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

# get example histology image
img = Image.open(
  urlopen(
    "https://datasets-server.huggingface.co/cached-assets/1aurent/LC25000/--/56a7c495692c27afd294a88b7aadaa7b79d8e270/--/default/train/24999/image/image.jpg"
  )
)

# load model from the hub
model = timm.create_model(
  model_name="hf-hub:1aurent/vit_base_patch16_224.owkin_pancancer_ft_lc25000_colon",
  pretrained=True,
  num_classes=0,
).eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
```
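The pooled embedding above is usually what you want for downstream tasks, but timm ViT backbones also expose unpooled per-token features. A minimal sketch, reusing `model` (with `num_classes=0`) and `transforms` from the snippet above:

```python
# unpooled token features: (batch_size, num_tokens, embed_dim) for a ViT backbone
features = model.forward_features(transforms(img).unsqueeze(0))

# pooled embedding, equivalent to calling the model directly with num_classes=0
embedding = model.forward_head(features, pre_logits=True)

print(features.shape, embedding.shape)
```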

## Citation
```bibtex
@article {Filiot2023.07.21.23292757,
  author = {Alexandre Filiot and Ridouane Ghermi and Antoine Olivier and Paul Jacob and Lucas Fidon and Alice Mac Kain and Charlie Saillard and Jean-Baptiste Schiratti},
  title = {Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling},
  elocation-id = {2023.07.21.23292757},
  year = {2023},
  doi = {10.1101/2023.07.21.23292757},
  publisher = {Cold Spring Harbor Laboratory Press},
  URL = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757},
  eprint = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757.full.pdf},
  journal = {medRxiv}
}
```