Commit 361f429 by rwightman (HF staff)
1 Parent(s): 90551a3

Update model config and README

Files changed (1):
  1. README.md +20 -12
README.md CHANGED

@@ -9,11 +9,10 @@ datasets:
 ---
 # Model card for mobilenetv3_large_100.ra4_e3600_r224_in1k
 
-A MobileNet-V4 image classification model. Trained on ImageNet-1k by Ross Wightman.
+A MobileNet-V3 image classification model. Trained on ImageNet-1k by Ross Wightman.
 
 Trained with `timm` scripts using hyper-parameters inspired by the MobileNet-V4 paper with `timm` enhancements.
 
-NOTE: So far, these are the only known MNV4 weights. Official weights for Tensorflow models are unreleased.
 
 
 ## Model Details
@@ -24,10 +23,10 @@ NOTE: So far, these are the only known MNV4 weights. Official weights for Tensor
 - Activations (M): 4.4
 - Image size: train = 224 x 224, test = 256 x 256
 - **Dataset:** ImageNet-1k
-- **Original:** https://github.com/tensorflow/models/tree/master/official/vision
 - **Papers:**
-  - MobileNetV4 -- Universal Models for the Mobile Ecosystem: https://arxiv.org/abs/2404.10518
   - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models
+  - Searching for MobileNetV3: https://arxiv.org/abs/1905.02244
+  - MobileNetV4 -- Universal Models for the Mobile Ecosystem: https://arxiv.org/abs/2404.10518
 
 ## Model Usage
 ### Image Classification
@@ -177,14 +176,6 @@ output = model.forward_head(output, pre_logits=True)
 
 ## Citation
 ```bibtex
-@article{qin2024mobilenetv4,
-  title={MobileNetV4-Universal Models for the Mobile Ecosystem},
-  author={Qin, Danfeng and Leichner, Chas and Delakis, Manolis and Fornoni, Marco and Luo, Shixin and Yang, Fan and Wang, Weijun and Banbury, Colby and Ye, Chengxi and Akin, Berkin and others},
-  journal={arXiv preprint arXiv:2404.10518},
-  year={2024}
-}
-```
-```bibtex
 @misc{rw2019timm,
   author = {Ross Wightman},
   title = {PyTorch Image Models},
@@ -195,3 +186,20 @@ output = model.forward_head(output, pre_logits=True)
   howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
 }
 ```
+```bibtex
+@inproceedings{howard2019searching,
+  title={Searching for mobilenetv3},
+  author={Howard, Andrew and Sandler, Mark and Chu, Grace and Chen, Liang-Chieh and Chen, Bo and Tan, Mingxing and Wang, Weijun and Zhu, Yukun and Pang, Ruoming and Vasudevan, Vijay and others},
+  booktitle={Proceedings of the IEEE/CVF international conference on computer vision},
+  pages={1314--1324},
+  year={2019}
+}
+```
+```bibtex
+@article{qin2024mobilenetv4,
+  title={MobileNetV4-Universal Models for the Mobile Ecosystem},
+  author={Qin, Danfeng and Leichner, Chas and Delakis, Manolis and Fornoni, Marco and Luo, Shixin and Yang, Fan and Wang, Weijun and Banbury, Colby and Ye, Chengxi and Akin, Berkin and others},
+  journal={arXiv preprint arXiv:2404.10518},
+  year={2024}
+}
+```