Add tags
README.md CHANGED
@@ -1,16 +1,14 @@
 ---
 license: apache-2.0
+tags:
+- image-classification
+- timm
 datasets:
-- cifar10
-- cifar100
 - imagenet
-- imagenet-21k
-- oxford-iiit-pets
-- oxford-flowers-102
 - vtab
 ---
 
-# Data-efficient Image Transformer tiny model
+# Data-efficient Image Transformer (tiny-sized model)
 
 Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman.
 
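The `image-classification` and `timm` tags added here advertise how the checkpoint is meant to be consumed. As a minimal sketch of that workflow, the snippet below classifies an image with a DeiT tiny checkpoint via the `transformers` Auto classes; the repo id `facebook/deit-tiny-patch16-224` and the sample image URL are assumptions for illustration, not part of this commit.

```python
# Minimal sketch: image classification with a DeiT tiny checkpoint through the
# transformers Auto classes. The repo id "facebook/deit-tiny-patch16-224" and
# the sample image URL are assumptions, not taken from this commit.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "facebook/deit-tiny-patch16-224"  # assumed repo id
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# Fetch a test image (a common transformers demo image).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The processor resizes and normalizes to the 224x224 input the card describes.
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the top logit to one of the 1,000 ImageNet-1k labels.
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```

Since the card notes the weights were converted from timm, an equivalent model can presumably also be built there, e.g. `timm.create_model("deit_tiny_patch16_224", pretrained=True)`, assuming the standard timm model name.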