* `mobilenet_edgetpu_v2_m` weights w/ `ra4` mnv4-small based recipe. 80.1% top-1 @ 224 and 80.7 @ 256.

model | top1 | top1_err | top5 | top5_err | param_count | img_size |
---|---|---|---|---|---|---|
mobilenetv4_conv_aa_large.e230_r448_in12k_ft_in1k | 84.99 | 15.01 | 97.294 | 2.706 | 32.59 | 544 |
mobilenetv4_conv_aa_large.e230_r384_in12k_ft_in1k | 84.772 | 15.228 | 97.344 | 2.656 | 32.59 | 480 |
mobilenetv4_conv_aa_large.e230_r448_in12k_ft_in1k | 84.64 | 15.36 | 97.114 | 2.886 | 32.59 | 448 |
mobilenetv4_conv_aa_large.e230_r384_in12k_ft_in1k | 84.314 | 15.686 | 97.102 | 2.898 | 32.59 | 384 |
mobilenetv4_conv_aa_large.e600_r384_in1k | 83.824 | 16.176 | 96.734 | 3.266 | 32.59 | 480 |
mobilenetv4_conv_aa_large.e600_r384_in1k | 83.244 | 16.756 | 96.392 | 3.608 | 32.59 | 384 |
mobilenetv4_hybrid_medium.e200_r256_in12k_ft_in1k | 82.99 | 17.01 | 96.67 | 3.33 | 11.07 | 320 |
mobilenetv4_hybrid_medium.e200_r256_in12k_ft_in1k | 82.364 | 17.636 | 96.256 | 3.744 | 11.07 | 256 |
model | top1 | top1_err | top5 | top5_err | param_count | img_size |
---|---|---|---|---|---|---|
efficientnet_b0.ra4_e3600_r224_in1k | 79.364 | 20.636 | 94.754 | 5.246 | 5.29 | 256 |
efficientnet_b0.ra4_e3600_r224_in1k | 78.584 | 21.416 | 94.338 | 5.662 | 5.29 | 224 |
mobilenetv1_100h.ra4_e3600_r224_in1k | 76.596 | 23.404 | 93.272 | 6.728 | 5.28 | 256 |
mobilenetv1_100.ra4_e3600_r224_in1k | 76.094 | 23.906 | 93.004 | 6.996 | 4.23 | 256 |
mobilenetv1_100h.ra4_e3600_r224_in1k | 75.662 | 24.338 | 92.504 | 7.496 | 5.28 | 224 |
mobilenetv1_100.ra4_e3600_r224_in1k | 75.382 | 24.618 | 92.312 | 7.688 | 4.23 | 224 |
* `set_input_size()` added to vit and swin v1/v2 models to allow changing image size, patch size, window size after model creation (see the usage sketch after the table below).
* `set_input_size`, `always_partition` and `strict_img_size` args have been added to `__init__` to allow more flexible input size constraints.
* `tiny` < .5M param models for testing that are actually trained on ImageNet-1k:

model | top1 | top1_err | top5 | top5_err | param_count | img_size | crop_pct |
---|---|---|---|---|---|---|---|
test_efficientnet.r160_in1k | 47.156 | 52.844 | 71.726 | 28.274 | 0.36 | 192 | 1.0 |
test_byobnet.r160_in1k | 46.698 | 53.302 | 71.674 | 28.326 | 0.46 | 192 | 1.0 |
test_efficientnet.r160_in1k | 46.426 | 53.574 | 70.928 | 29.072 | 0.36 | 160 | 0.875 |
test_byobnet.r160_in1k | 45.378 | 54.622 | 70.572 | 29.428 | 0.46 | 160 | 0.875 |
test_vit.r160_in1k | 42.0 | 58.0 | 68.664 | 31.336 | 0.37 | 192 | 1.0 |
test_vit.r160_in1k | 40.822 | 59.178 | 67.212 | 32.788 | 0.37 | 160 | 0.875 |
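A quick sketch of the `set_input_size()` call noted above; the keyword arguments are assumptions rather than a definitive signature (vit takes image/patch size, swin additionally window size).

```python
import torch
import timm

# Create a model at its pretrained size, then change the input size after creation.
model = timm.create_model('vit_base_patch16_224', pretrained=True)
model.set_input_size(img_size=384)  # assumed kwarg name; resamples pos embed for the new grid
out = model(torch.randn(1, 3, 384, 384))
print(out.shape)  # expected: torch.Size([1, 1000])
```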
model | top1 | top1_err | top5 | top5_err | param_count | img_size |
---|---|---|---|---|---|---|
mobilenetv4_hybrid_large.ix_e600_r384_in1k | 84.356 | 15.644 | 96.892 | 3.108 | 37.76 | 448 |
mobilenetv4_hybrid_large.ix_e600_r384_in1k | 83.990 | 16.010 | 96.702 | 3.298 | 37.76 | 384 |
mobilenetv4_hybrid_medium.ix_e550_r384_in1k | 83.394 | 16.606 | 96.760 | 3.240 | 11.07 | 448 |
mobilenetv4_hybrid_medium.ix_e550_r384_in1k | 82.968 | 17.032 | 96.474 | 3.526 | 11.07 | 384 |
mobilenetv4_hybrid_medium.ix_e550_r256_in1k | 82.492 | 17.508 | 96.278 | 3.722 | 11.07 | 320 |
mobilenetv4_hybrid_medium.ix_e550_r256_in1k | 81.446 | 18.554 | 95.704 | 4.296 | 11.07 | 256 |
`timm` trained weights added:

model | top1 | top1_err | top5 | top5_err | param_count | img_size |
---|---|---|---|---|---|---|
mobilenetv4_hybrid_large.e600_r384_in1k | 84.266 | 15.734 | 96.936 | 3.064 | 37.76 | 448 |
mobilenetv4_hybrid_large.e600_r384_in1k | 83.800 | 16.200 | 96.770 | 3.230 | 37.76 | 384 |
mobilenetv4_conv_large.e600_r384_in1k | 83.392 | 16.608 | 96.622 | 3.378 | 32.59 | 448 |
mobilenetv4_conv_large.e600_r384_in1k | 82.952 | 17.048 | 96.266 | 3.734 | 32.59 | 384 |
mobilenetv4_conv_large.e500_r256_in1k | 82.674 | 17.326 | 96.31 | 3.69 | 32.59 | 320 |
mobilenetv4_conv_large.e500_r256_in1k | 81.862 | 18.138 | 95.69 | 4.31 | 32.59 | 256 |
mobilenetv4_hybrid_medium.e500_r224_in1k | 81.276 | 18.724 | 95.742 | 4.258 | 11.07 | 256 |
mobilenetv4_conv_medium.e500_r256_in1k | 80.858 | 19.142 | 95.768 | 4.232 | 9.72 | 320 |
mobilenetv4_hybrid_medium.e500_r224_in1k | 80.442 | 19.558 | 95.38 | 4.62 | 11.07 | 224 |
mobilenetv4_conv_blur_medium.e500_r224_in1k | 80.142 | 19.858 | 95.298 | 4.702 | 9.72 | 256 |
mobilenetv4_conv_medium.e500_r256_in1k | 79.928 | 20.072 | 95.184 | 4.816 | 9.72 | 256 |
mobilenetv4_conv_medium.e500_r224_in1k | 79.808 | 20.192 | 95.186 | 4.814 | 9.72 | 256 |
mobilenetv4_conv_blur_medium.e500_r224_in1k | 79.438 | 20.562 | 94.932 | 5.068 | 9.72 | 224 |
mobilenetv4_conv_medium.e500_r224_in1k | 79.094 | 20.906 | 94.77 | 5.23 | 9.72 | 224 |
mobilenetv4_conv_small.e2400_r224_in1k | 74.616 | 25.384 | 92.072 | 7.928 | 3.77 | 256 |
mobilenetv4_conv_small.e1200_r224_in1k | 74.292 | 25.708 | 92.116 | 7.884 | 3.77 | 256 |
mobilenetv4_conv_small.e2400_r224_in1k | 73.756 | 26.244 | 91.422 | 8.578 | 3.77 | 224 |
mobilenetv4_conv_small.e1200_r224_in1k | 73.454 | 26.546 | 91.34 | 8.66 | 3.77 | 224 |
* `normalize=` flag for transforms, return non-normalized torch.Tensor with original dtype (for `chug`); see the sketch after the code examples below.
* Searching for Better ViT Baselines (For the GPU Poor) weights and vit variants released. Exploring model shapes between Tiny and Base.
* See example usage with `timm` models at https://github.com/huggingface/pytorch-image-models/discussions/1232#discussioncomment-9320949
* `forward_intermediates()` API refined and added to more models including some ConvNets that have other extraction methods.
* Most model architectures now support `features_only=True` feature extraction. Remaining 34 architectures can be supported but based on priority requests.
* `features_only=True` support for ViT models with flat hidden states or non-std module layouts (so far covering `'vit_*', 'twins_*', 'deit*', 'beit*', 'mvitv2*', 'eva*', 'samvit_*', 'flexivit*'`)
* `forward_intermediates()` API that can be used with a feature wrapping module or directly:

```python
import torch
import timm

model = timm.create_model('vit_base_patch16_224')
input = torch.randn(2, 3, 224, 224)

final_feat, intermediates = model.forward_intermediates(input)
output = model.forward_head(final_feat)  # pooling + classifier head

print(final_feat.shape)
# torch.Size([2, 197, 768])

for f in intermediates:
    print(f.shape)
# torch.Size([2, 768, 14, 14])
# torch.Size([2, 768, 14, 14])
# torch.Size([2, 768, 14, 14])
# torch.Size([2, 768, 14, 14])
# torch.Size([2, 768, 14, 14])
# torch.Size([2, 768, 14, 14])
# torch.Size([2, 768, 14, 14])
# torch.Size([2, 768, 14, 14])
# torch.Size([2, 768, 14, 14])
# torch.Size([2, 768, 14, 14])
# torch.Size([2, 768, 14, 14])
# torch.Size([2, 768, 14, 14])

print(output.shape)
# torch.Size([2, 1000])
```

```python
import torch
import timm

model = timm.create_model('eva02_base_patch16_clip_224', pretrained=True, img_size=512, features_only=True, out_indices=(-3, -2,))
output = model(torch.randn(2, 3, 512, 512))

for o in output:
    print(o.shape)
# torch.Size([2, 768, 32, 32])
# torch.Size([2, 768, 32, 32])
```
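A minimal sketch of the `normalize=` flag from the list above, assuming it is exposed through `timm.data.create_transform`; with `normalize=False` the transform returns a plain `torch.Tensor` in the original dtype for a downstream pipeline (e.g. `chug`) to normalize.

```python
import timm
from timm.data import resolve_data_config, create_transform

model = timm.create_model('vit_base_patch16_224', pretrained=True)
cfg = resolve_data_config({}, model=model)

# normalize=False is the flag described above (assumed kwarg name); the returned
# transform skips mean/std normalization so the tensor keeps its original dtype.
transform = create_transform(**cfg, normalize=False)
```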
Datasets & transform refactoring
* Hugging Face streaming dataset support (`--dataset hfids:org/dataset`); a usage sketch follows this list.
* HF `datasets` and webdataset wrapper streaming from HF hub with recent `timm` ImageNet uploads to https://huggingface.co/timm
* `--input-size 1 224 224` or `--in-chans 1` sets PIL image conversion appropriately in dataset
* Train without a validation set (`--val-split ''`) in train script
* Add `--bce-sum` (sum over class dim) and `--bce-pos-weight` (positive weighting) args for training as they're common BCE loss tweaks I was often hard coding
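A rough sketch of the streaming dataset support above via `timm.data.create_dataset`; the `hfids:` name form follows the `--dataset` flag shown, and the dataset id is a placeholder.

```python
from timm.data import create_dataset

# 'org/dataset' is a placeholder; substitute one of the timm ImageNet uploads on the HF hub.
train_ds = create_dataset(
    'hfids:org/dataset',
    root='',
    split='train',
    is_training=True,
    batch_size=256,   # iterable datasets use this for sharding / last-batch handling
)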
* Added `model_args` config entry. `model_args` will be passed as kwargs through to models on creation.
* `vision_transformer.py` typing and doc cleanup by Laureηt
* `quickgelu` ViT variants for OpenAI, DFN, MetaCLIP weights that use it (less efficient)
* `convnext_xxlarge`
* `vision_transformer.py`
* Dynamic img size support added to `vision_transformer.py`, `vision_transformer_hybrid.py`, `deit.py`, and `eva.py` w/o breaking backward compat:
  * Add `dynamic_img_size=True` to args at model creation time to allow changing the grid size (interpolate abs and/or ROPE pos embed each forward pass).
  * Add `dynamic_img_pad=True` to allow image sizes that aren't divisible by patch size (pad bottom right to patch size each forward pass).
  * Passing a different `img_size` (interpolate pretrained embed weights once) on creation still works.
  * Changing `patch_size` (resize pretrained patch_embed weights once) on creation still works.
  * Example validation cmd: `python validate.py /imagenet --model vit_base_patch16_224 --amp --amp-dtype bfloat16 --img-size 255 --crop-pct 1.0 --model-kwargs dynamic_img_size=True dynamic_img_pad=True`
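The two dynamic flags above are plain `create_model` kwargs, so the Python equivalent of the validation command looks roughly like this:

```python
import torch
import timm

model = timm.create_model(
    'vit_base_patch16_224',
    pretrained=True,
    dynamic_img_size=True,  # interpolate pos embed each forward pass
    dynamic_img_pad=True,   # pad bottom/right when size isn't divisible by patch size
)
out = model(torch.randn(1, 3, 255, 255))  # 255 isn't divisible by 16, padded on the fly
print(out.shape)  # torch.Size([1, 1000])
```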
* Added `--reparam` arg to `benchmark.py`, `onnx_export.py`, and `validate.py` to trigger layer reparameterization / fusion for models with any one of `reparameterize()`, `switch_to_deploy()` or `fuse()`; a rough sketch of what this triggers follows below.
* Example validation cmd: `python validate.py /imagenet --model swin_base_patch4_window7_224.ms_in22k_ft_in1k --amp --amp-dtype bfloat16 --input-size 3 256 320 --model-kwargs window_size=8,10 img_size=256,320`
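Roughly what the `--reparam` flag triggers in those scripts: walk the model and call whichever of the listed methods a module exposes. A hand-rolled sketch (model name illustrative):

```python
import timm

model = timm.create_model('repvgg_a2', pretrained=True).eval()

# Fuse / reparameterize any module exposing one of the methods named above.
for module in model.modules():
    for fn_name in ('reparameterize', 'switch_to_deploy', 'fuse'):
        fn = getattr(module, fn_name, None)
        if callable(fn):
            fn()
            break
```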
* Fix `selecsls*` model naming regression
* `seresnextaa201d_32x8d.sw_in12k_ft_in1k_384` weights (and `.sw_in12k` pretrain) with 87.3% top-1 on ImageNet-1k, best ImageNet ResNet family model I'm aware of.
* `timm` 0.9 released, transition from 0.8.x dev releases
* `get_intermediate_layers` function on `timm` vit/deit models for grabbing hidden states (inspired by DINO impl). This is WIP and may change significantly… feedback welcome.
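A usage sketch for the WIP `get_intermediate_layers` API above; since it may still change, treat the keyword arguments here as assumptions.

```python
import torch
import timm

model = timm.create_model('vit_base_patch16_224', pretrained=True)
x = torch.randn(1, 3, 224, 224)

# DINO-style: grab hidden states from the last n blocks (n kwarg assumed).
hidden_states = model.get_intermediate_layers(x, n=4)
for h in hidden_states:
    print(h.shape)
```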
* Error if `pretrained=True` and no weights exist (instead of continuing with random initialization)
* bitsandbytes optimizers available via the `bnb` prefix, ie `bnbadam8bit`
* `timm` out of pre-release state
* `timm` models uploaded to HF Hub and almost all updated to support multi-weight pretrained configs
* Gradient accumulation support (`--grad-accum-steps`), thanks Taeksang Kim
* Added `--head-init-scale` and `--head-init-bias` to train.py to scale classifier head and set fixed bias for fine-tune
* Removed InplaceABN (`inplace_abn`) use, replaced use in tresnet with standard BatchNorm (modified weights accordingly).
* Separate dropout args: `drop_rate` (classifier dropout), `proj_drop_rate` (block mlp / out projections), `pos_drop_rate` (position embedding drop), `attn_drop_rate` (attention dropout). Also add patch dropout (FLIP) to vit and eva models.
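The separated dropout args above are regular `create_model` kwargs for the affected models, e.g. (rates illustrative):

```python
import timm

model = timm.create_model(
    'vit_base_patch16_224',
    pretrained=True,
    drop_rate=0.1,       # classifier dropout
    proj_drop_rate=0.1,  # block mlp / out projection dropout
    pos_drop_rate=0.0,   # position embedding dropout
    attn_drop_rate=0.0,  # attention dropout
)
```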
* `timm` trained weights added with recipe based tags to differentiate:
  * `resnetaa50d.sw_in12k_ft_in1k` - 81.7 @ 224, 82.6 @ 288
  * `resnetaa101d.sw_in12k_ft_in1k` - 83.5 @ 224, 84.1 @ 288
  * `seresnextaa101d_32x8d.sw_in12k_ft_in1k` - 86.0 @ 224, 86.5 @ 288
  * `seresnextaa101d_32x8d.sw_in12k_ft_in1k_288` - 86.5 @ 288, 86.7 @ 320

model | top1 | top5 | img_size | param_count | gmacs | macts |
---|---|---|---|---|---|---|
convnext_xxlarge.clip_laion2b_soup_ft_in1k | 88.612 | 98.704 | 256 | 846.47 | 198.09 | 124.45 |
convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384 | 88.312 | 98.578 | 384 | 200.13 | 101.11 | 126.74 |
convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320 | 87.968 | 98.47 | 320 | 200.13 | 70.21 | 88.02 |
convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384 | 87.138 | 98.212 | 384 | 88.59 | 45.21 | 84.49 |
convnext_base.clip_laion2b_augreg_ft_in12k_in1k | 86.344 | 97.97 | 256 | 88.59 | 20.09 | 37.55 |
model | top1 | top5 | param_count | img_size |
---|---|---|---|---|
eva02_large_patch14_448.mim_m38m_ft_in22k_in1k | 90.054 | 99.042 | 305.08 | 448 |
eva02_large_patch14_448.mim_in22k_ft_in22k_in1k | 89.946 | 99.01 | 305.08 | 448 |
eva_giant_patch14_560.m30m_ft_in22k_in1k | 89.792 | 98.992 | 1014.45 | 560 |
eva02_large_patch14_448.mim_in22k_ft_in1k | 89.626 | 98.954 | 305.08 | 448 |
eva02_large_patch14_448.mim_m38m_ft_in1k | 89.57 | 98.918 | 305.08 | 448 |
eva_giant_patch14_336.m30m_ft_in22k_in1k | 89.56 | 98.956 | 1013.01 | 336 |
eva_giant_patch14_336.clip_ft_in1k | 89.466 | 98.82 | 1013.01 | 336 |
eva_large_patch14_336.in22k_ft_in22k_in1k | 89.214 | 98.854 | 304.53 | 336 |
eva_giant_patch14_224.clip_ft_in1k | 88.882 | 98.678 | 1012.56 | 224 |
eva02_base_patch14_448.mim_in22k_ft_in22k_in1k | 88.692 | 98.722 | 87.12 | 448 |
eva_large_patch14_336.in22k_ft_in1k | 88.652 | 98.722 | 304.53 | 336 |
eva_large_patch14_196.in22k_ft_in22k_in1k | 88.592 | 98.656 | 304.14 | 196 |
eva02_base_patch14_448.mim_in22k_ft_in1k | 88.23 | 98.564 | 87.12 | 448 |
eva_large_patch14_196.in22k_ft_in1k | 87.934 | 98.504 | 304.14 | 196 |
eva02_small_patch14_336.mim_in22k_ft_in1k | 85.74 | 97.614 | 22.13 | 336 |
eva02_tiny_patch14_336.mim_in22k_ft_in1k | 80.658 | 95.524 | 5.76 | 336 |
* `regnet.py`, `rexnet.py`, `byobnet.py`, `resnetv2.py`, `swin_transformer.py`, `swin_transformer_v2.py`, `swin_transformer_v2_cr.py`
* Feature extraction (NCHW for `swinv2_cr_*`, and NHWC for all others) and spatial embedding outputs.
* `timm` weights:
  * `rexnetr_200.sw_in12k_ft_in1k` - 82.6 @ 224, 83.2 @ 288
  * `rexnetr_300.sw_in12k_ft_in1k` - 84.0 @ 224, 84.5 @ 288
  * `regnety_120.sw_in12k_ft_in1k` - 85.0 @ 224, 85.4 @ 288
  * `regnety_160.lion_in12k_ft_in1k` - 85.6 @ 224, 86.0 @ 288
  * `regnety_160.sw_in12k_ft_in1k` - 85.6 @ 224, 86.0 @ 288 (compare to SWAG PT + 1k FT this is same BUT much lower res, blows SEER FT away)
* `convnext_xxlarge` default LayerNorm eps to 1e-5 (for CLIP weights, improved stability)
* `convnext_large_mlp.clip_laion2b_ft_320` and `convnext_large_mlp.clip_laion2b_ft_soup_320` CLIP image tower weights for features & fine-tune
* `safetensor` checkpoint support added
* `vit_relpos`, `coatnet`/`maxxvit` (to start)
* `features_only=True`
* `convnext_base.clip_laion2b_augreg_ft_in1k` - 86.2% @ 256x256
* `convnext_base.clip_laiona_augreg_ft_in1k_384` - 86.5% @ 384x384
* `convnext_large_mlp.clip_laion2b_augreg_ft_in1k` - 87.3% @ 256x256
* `convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384` - 87.9% @ 384x384
* DaViT models: support `features_only=True`. Adapted from https://github.com/dingmyu/davit by Fredo.
* `features_only=True`.
* `features_only=True` support added to new `conv` variants, weight remap required.
* Moved from `/results` to `timm/data/_info`.
* Updated `timm` `inference.py`, try: `python inference.py /folder/to/images --model convnext_small.in12k --label-type detail --topk 5`
* Add two convnext 12k -> 1k fine-tunes at 384x384
  * `convnext_tiny.in12k_ft_in1k_384` - 85.1 @ 384
  * `convnext_small.in12k_ft_in1k_384` - 86.2 @ 384
* Push all MaxxViT weights to HF hub, and add new ImageNet-12k -> 1k fine-tunes for `rw` base MaxViT and CoAtNet 1/2 models
model | top1 | top5 | samples / sec | Params (M) | GMAC | Act (M) |
---|---|---|---|---|---|---|
maxvit_xlarge_tf_512.in21k_ft_in1k | 88.53 | 98.64 | 21.76 | 475.77 | 534.14 | 1413.22 |
maxvit_xlarge_tf_384.in21k_ft_in1k | 88.32 | 98.54 | 42.53 | 475.32 | 292.78 | 668.76 |
maxvit_base_tf_512.in21k_ft_in1k | 88.20 | 98.53 | 50.87 | 119.88 | 138.02 | 703.99 |
maxvit_large_tf_512.in21k_ft_in1k | 88.04 | 98.40 | 36.42 | 212.33 | 244.75 | 942.15 |
maxvit_large_tf_384.in21k_ft_in1k | 87.98 | 98.56 | 71.75 | 212.03 | 132.55 | 445.84 |
maxvit_base_tf_384.in21k_ft_in1k | 87.92 | 98.54 | 104.71 | 119.65 | 73.80 | 332.90 |
maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k | 87.81 | 98.37 | 106.55 | 116.14 | 70.97 | 318.95 |
maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k | 87.47 | 98.37 | 149.49 | 116.09 | 72.98 | 213.74 |
coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k | 87.39 | 98.31 | 160.80 | 73.88 | 47.69 | 209.43 |
maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k | 86.89 | 98.02 | 375.86 | 116.14 | 23.15 | 92.64 |
maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k | 86.64 | 98.02 | 501.03 | 116.09 | 24.20 | 62.77 |
maxvit_base_tf_512.in1k | 86.60 | 97.92 | 50.75 | 119.88 | 138.02 | 703.99 |
coatnet_2_rw_224.sw_in12k_ft_in1k | 86.57 | 97.89 | 631.88 | 73.87 | 15.09 | 49.22 |
maxvit_large_tf_512.in1k | 86.52 | 97.88 | 36.04 | 212.33 | 244.75 | 942.15 |
coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k | 86.49 | 97.90 | 620.58 | 73.88 | 15.18 | 54.78 |
maxvit_base_tf_384.in1k | 86.29 | 97.80 | 101.09 | 119.65 | 73.80 | 332.90 |
maxvit_large_tf_384.in1k | 86.23 | 97.69 | 70.56 | 212.03 | 132.55 | 445.84 |
maxvit_small_tf_512.in1k | 86.10 | 97.76 | 88.63 | 69.13 | 67.26 | 383.77 |
maxvit_tiny_tf_512.in1k | 85.67 | 97.58 | 144.25 | 31.05 | 33.49 | 257.59 |
maxvit_small_tf_384.in1k | 85.54 | 97.46 | 188.35 | 69.02 | 35.87 | 183.65 |
maxvit_tiny_tf_384.in1k | 85.11 | 97.38 | 293.46 | 30.98 | 17.53 | 123.42 |
maxvit_large_tf_224.in1k | 84.93 | 96.97 | 247.71 | 211.79 | 43.68 | 127.35 |
coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k | 84.90 | 96.96 | 1025.45 | 41.72 | 8.11 | 40.13 |
maxvit_base_tf_224.in1k | 84.85 | 96.99 | 358.25 | 119.47 | 24.04 | 95.01 |
maxxvit_rmlp_small_rw_256.sw_in1k | 84.63 | 97.06 | 575.53 | 66.01 | 14.67 | 58.38 |
coatnet_rmlp_2_rw_224.sw_in1k | 84.61 | 96.74 | 625.81 | 73.88 | 15.18 | 54.78 |
maxvit_rmlp_small_rw_224.sw_in1k | 84.49 | 96.76 | 693.82 | 64.90 | 10.75 | 49.30 |
maxvit_small_tf_224.in1k | 84.43 | 96.83 | 647.96 | 68.93 | 11.66 | 53.17 |
maxvit_rmlp_tiny_rw_256.sw_in1k | 84.23 | 96.78 | 807.21 | 29.15 | 6.77 | 46.92 |
coatnet_1_rw_224.sw_in1k | 83.62 | 96.38 | 989.59 | 41.72 | 8.04 | 34.60 |
maxvit_tiny_rw_224.sw_in1k | 83.50 | 96.50 | 1100.53 | 29.06 | 5.11 | 33.11 |
maxvit_tiny_tf_224.in1k | 83.41 | 96.59 | 1004.94 | 30.92 | 5.60 | 35.78 |
coatnet_rmlp_1_rw_224.sw_in1k | 83.36 | 96.45 | 1093.03 | 41.69 | 7.85 | 35.47 |
maxxvitv2_nano_rw_256.sw_in1k | 83.11 | 96.33 | 1276.88 | 23.70 | 6.26 | 23.05 |
maxxvit_rmlp_nano_rw_256.sw_in1k | 83.03 | 96.34 | 1341.24 | 16.78 | 4.37 | 26.05 |
maxvit_rmlp_nano_rw_256.sw_in1k | 82.96 | 96.26 | 1283.24 | 15.50 | 4.47 | 31.92 |
maxvit_nano_rw_256.sw_in1k | 82.93 | 96.23 | 1218.17 | 15.45 | 4.46 | 30.28 |
coatnet_bn_0_rw_224.sw_in1k | 82.39 | 96.19 | 1600.14 | 27.44 | 4.67 | 22.04 |
coatnet_0_rw_224.sw_in1k | 82.39 | 95.84 | 1831.21 | 27.44 | 4.43 | 18.73 |
coatnet_rmlp_nano_rw_224.sw_in1k | 82.05 | 95.87 | 2109.09 | 15.15 | 2.62 | 20.34 |
coatnext_nano_rw_224.sw_in1k | 81.95 | 95.92 | 2525.52 | 14.70 | 2.47 | 12.80 |
coatnet_nano_rw_224.sw_in1k | 81.70 | 95.64 | 2344.52 | 15.14 | 2.41 | 15.41 |
maxvit_rmlp_pico_rw_256.sw_in1k | 80.53 | 95.21 | 1594.71 | 7.52 | 1.85 | 24.86 |
* ConvNeXt ImageNet-12k -> 1k fine-tunes (and `.in12k` tags):
  * `convnext_nano.in12k_ft_in1k` - 82.3 @ 224, 82.9 @ 288 (previously released)
  * `convnext_tiny.in12k_ft_in1k` - 84.2 @ 224, 84.5 @ 288
  * `convnext_small.in12k_ft_in1k` - 85.2 @ 224, 85.3 @ 288
* Added `--model-kwargs` and `--opt-kwargs` to scripts to pass through rare args directly to model classes from cmd line
  * `train.py /imagenet --model resnet50 --amp --model-kwargs output_stride=16 act_layer=silu`
  * `train.py /imagenet --model vit_base_patch16_clip_224 --img-size 240 --amp --model-kwargs img_size=240 patch_size=12`
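The `--model-kwargs` passthrough above mirrors what `timm.create_model` already allows from Python, where extra kwargs are forwarded to the model class:

```python
import timm

# Python equivalents of the two command lines above.
resnet = timm.create_model('resnet50', output_stride=16, act_layer='silu')
vit = timm.create_model('vit_base_patch16_clip_224', img_size=240, patch_size=12)
```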
* `convnext.py`
* `efficientnet_b5.in12k_ft_in1k` - 85.9 @ 448x448
* `vit_medium_patch16_gap_384.in12k_ft_in1k` - 85.5 @ 384x384
* `vit_medium_patch16_gap_256.in12k_ft_in1k` - 84.5 @ 256x256
* `convnext_nano.in12k_ft_in1k` - 82.9 @ 288x288
* EVA large models added to `vision_transformer.py`, MAE style ViT-L/14 MIM pretrain w/ EVA-CLIP targets, FT on ImageNet-1k (w/ ImageNet-22k intermediate for some)

model | top1 | param_count | gmac | macts | hub |
---|---|---|---|---|---|
eva_large_patch14_336.in22k_ft_in22k_in1k | 89.2 | 304.5 | 191.1 | 270.2 | link |
eva_large_patch14_336.in22k_ft_in1k | 88.7 | 304.5 | 191.1 | 270.2 | link |
eva_large_patch14_196.in22k_ft_in22k_in1k | 88.6 | 304.1 | 61.6 | 63.5 | link |
eva_large_patch14_196.in22k_ft_in1k | 87.9 | 304.1 | 61.6 | 63.5 | link |
* EVA giant model weights added to `beit.py`.

model | top1 | param_count | gmac | macts | hub |
---|---|---|---|---|---|
eva_giant_patch14_560.m30m_ft_in22k_in1k | 89.8 | 1014.4 | 1906.8 | 2577.2 | link |
eva_giant_patch14_336.m30m_ft_in22k_in1k | 89.6 | 1013 | 620.6 | 550.7 | link |
eva_giant_patch14_336.clip_ft_in1k | 89.4 | 1013 | 620.6 | 550.7 | link |
eva_giant_patch14_224.clip_ft_in1k | 89.1 | 1012.6 | 267.2 | 192.6 | link |
* Dev release (`0.8.0dev0`) of multi-weight support (`model_arch.pretrained_tag`). Install with `pip install --pre timm`. A creation example follows the first table below.
* `--torchcompile` argument added

model | top1 | param_count | gmac | macts | hub |
---|---|---|---|---|---|
vit_huge_patch14_clip_336.laion2b_ft_in12k_in1k | 88.6 | 632.5 | 391 | 407.5 | link |
vit_large_patch14_clip_336.openai_ft_in12k_in1k | 88.3 | 304.5 | 191.1 | 270.2 | link |
vit_huge_patch14_clip_224.laion2b_ft_in12k_in1k | 88.2 | 632 | 167.4 | 139.4 | link |
vit_large_patch14_clip_336.laion2b_ft_in12k_in1k | 88.2 | 304.5 | 191.1 | 270.2 | link |
vit_large_patch14_clip_224.openai_ft_in12k_in1k | 88.2 | 304.2 | 81.1 | 88.8 | link |
vit_large_patch14_clip_224.laion2b_ft_in12k_in1k | 87.9 | 304.2 | 81.1 | 88.8 | link |
vit_large_patch14_clip_224.openai_ft_in1k | 87.9 | 304.2 | 81.1 | 88.8 | link |
vit_large_patch14_clip_336.laion2b_ft_in1k | 87.9 | 304.5 | 191.1 | 270.2 | link |
vit_huge_patch14_clip_224.laion2b_ft_in1k | 87.6 | 632 | 167.4 | 139.4 | link |
vit_large_patch14_clip_224.laion2b_ft_in1k | 87.3 | 304.2 | 81.1 | 88.8 | link |
vit_base_patch16_clip_384.laion2b_ft_in12k_in1k | 87.2 | 86.9 | 55.5 | 101.6 | link |
vit_base_patch16_clip_384.openai_ft_in12k_in1k | 87 | 86.9 | 55.5 | 101.6 | link |
vit_base_patch16_clip_384.laion2b_ft_in1k | 86.6 | 86.9 | 55.5 | 101.6 | link |
vit_base_patch16_clip_384.openai_ft_in1k | 86.2 | 86.9 | 55.5 | 101.6 | link |
vit_base_patch16_clip_224.laion2b_ft_in12k_in1k | 86.2 | 86.6 | 17.6 | 23.9 | link |
vit_base_patch16_clip_224.openai_ft_in12k_in1k | 85.9 | 86.6 | 17.6 | 23.9 | link |
vit_base_patch32_clip_448.laion2b_ft_in12k_in1k | 85.8 | 88.3 | 17.9 | 23.9 | link |
vit_base_patch16_clip_224.laion2b_ft_in1k | 85.5 | 86.6 | 17.6 | 23.9 | link |
vit_base_patch32_clip_384.laion2b_ft_in12k_in1k | 85.4 | 88.3 | 13.1 | 16.5 | link |
vit_base_patch16_clip_224.openai_ft_in1k | 85.3 | 86.6 | 17.6 | 23.9 | link |
vit_base_patch32_clip_384.openai_ft_in12k_in1k | 85.2 | 88.3 | 13.1 | 16.5 | link |
vit_base_patch32_clip_224.laion2b_ft_in12k_in1k | 83.3 | 88.2 | 4.4 | 5 | link |
vit_base_patch32_clip_224.laion2b_ft_in1k | 82.6 | 88.2 | 4.4 | 5 | link |
vit_base_patch32_clip_224.openai_ft_in1k | 81.9 | 88.2 | 4.4 | 5 | link |
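With multi-weight support, a specific pretrained weight from tables like the one above is selected by its `model_arch.pretrained_tag` name, e.g.:

```python
import timm

model = timm.create_model('vit_base_patch16_clip_384.laion2b_ft_in12k_in1k', pretrained=True)
```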
model | top1 | param_count | gmac | macts | hub |
---|---|---|---|---|---|
maxvit_xlarge_tf_512.in21k_ft_in1k | 88.5 | 475.8 | 534.1 | 1413.2 | link |
maxvit_xlarge_tf_384.in21k_ft_in1k | 88.3 | 475.3 | 292.8 | 668.8 | link |
maxvit_base_tf_512.in21k_ft_in1k | 88.2 | 119.9 | 138 | 704 | link |
maxvit_large_tf_512.in21k_ft_in1k | 88 | 212.3 | 244.8 | 942.2 | link |
maxvit_large_tf_384.in21k_ft_in1k | 88 | 212 | 132.6 | 445.8 | link |
maxvit_base_tf_384.in21k_ft_in1k | 87.9 | 119.6 | 73.8 | 332.9 | link |
maxvit_base_tf_512.in1k | 86.6 | 119.9 | 138 | 704 | link |
maxvit_large_tf_512.in1k | 86.5 | 212.3 | 244.8 | 942.2 | link |
maxvit_base_tf_384.in1k | 86.3 | 119.6 | 73.8 | 332.9 | link |
maxvit_large_tf_384.in1k | 86.2 | 212 | 132.6 | 445.8 | link |
maxvit_small_tf_512.in1k | 86.1 | 69.1 | 67.3 | 383.8 | link |
maxvit_tiny_tf_512.in1k | 85.7 | 31 | 33.5 | 257.6 | link |
maxvit_small_tf_384.in1k | 85.5 | 69 | 35.9 | 183.6 | link |
maxvit_tiny_tf_384.in1k | 85.1 | 31 | 17.5 | 123.4 | link |
maxvit_large_tf_224.in1k | 84.9 | 211.8 | 43.7 | 127.4 | link |
maxvit_base_tf_224.in1k | 84.9 | 119.5 | 24 | 95 | link |
maxvit_small_tf_224.in1k | 84.4 | 68.9 | 11.7 | 53.2 | link |
maxvit_tiny_tf_224.in1k | 83.4 | 30.9 | 5.6 | 35.8 | link |
* APEX AMP via `--amp-impl apex`, bfloat16 supported via `--amp-dtype bfloat16`
* `maxxvit` series weights, incl first ConvNeXt block based `coatnext` and `maxxvit` experiments:
  * `coatnext_nano_rw_224` - 82.0 @ 224 (G) -- (uses ConvNeXt conv block, no BatchNorm)
  * `maxxvit_rmlp_nano_rw_256` - 83.0 @ 256, 83.7 @ 320 (G) (uses ConvNeXt conv block, no BN)
  * `maxvit_rmlp_small_rw_224` - 84.5 @ 224, 85.1 @ 320 (G)
  * `maxxvit_rmlp_small_rw_256` - 84.6 @ 256, 84.9 @ 288 (G) -- could be trained better, hparams need tuning (uses ConvNeXt block, no BN)
  * `coatnet_rmlp_2_rw_224` - 84.6 @ 224, 85 @ 320 (T)
* `timm` docs home now exists, look for more here in the future
* `maxxvit` series incl a `pico` (7.5M params, 1.9 GMACs), two `tiny` variants:
  * `maxvit_rmlp_pico_rw_256` - 80.5 @ 256, 81.3 @ 320 (T)
  * `maxvit_tiny_rw_224` - 83.5 @ 224 (G)
  * `maxvit_rmlp_tiny_rw_256` - 84.2 @ 256, 84.8 @ 320 (T)
  * `maxvit_rmlp_nano_rw_256` - 83.0 @ 256, 83.6 @ 320 (T)
* CoAtNet and MaxVit `timm` original models
  * both found in `maxxvit.py` model def, contains numerous experiments outside scope of original papers
  * `coatnet_nano_rw_224` - 81.7 @ 224 (T)
  * `coatnet_rmlp_nano_rw_224` - 82.0 @ 224, 82.8 @ 320 (T)
  * `coatnet_0_rw_224` - 82.4 (T) -- NOTE timm '0' coatnets have 2 more 3rd stage blocks
  * `coatnet_bn_0_rw_224` - 82.4 (T)
  * `maxvit_nano_rw_256` - 82.9 @ 256 (T)
  * `coatnet_rmlp_1_rw_224` - 83.4 @ 224, 84 @ 320 (T)
  * `coatnet_1_rw_224` - 83.6 @ 224 (G)
* (T) = TPU trained w/ `bits_and_tpu` branch training code, (G) = GPU trained
* (`timm` re-write for license purposes)
* `convnext_atto` - 75.7 @ 224, 77.0 @ 288
* `convnext_atto_ols` - 75.9 @ 224, 77.2 @ 288
* `convnext_femto` - 77.5 @ 224, 78.7 @ 288
* `convnext_femto_ols` - 77.9 @ 224, 78.9 @ 288
* `convnext_pico` - 79.5 @ 224, 80.4 @ 288
* `convnext_pico_ols` - 79.5 @ 224, 80.5 @ 288
* `convnext_nano_ols` - 80.9 @ 224, 81.6 @ 288
* `darknetaa53` - 79.8 @ 256, 80.5 @ 288
* `convnext_nano` - 80.8 @ 224, 81.5 @ 288
* `cs3sedarknet_l` - 81.2 @ 256, 81.8 @ 288
* `cs3darknet_x` - 81.8 @ 256, 82.2 @ 288
* `cs3sedarknet_x` - 82.2 @ 256, 82.7 @ 288
* `cs3edgenet_x` - 82.2 @ 256, 82.7 @ 288
* `cs3se_edgenet_x` - 82.8 @ 256, 83.5 @ 320
* `cs3*` weights above all trained on TPU w/ `bits_and_tpu` branch. Thanks to TRC program!

More models, more fixes
* `ResNet` defs added by request with 1 block repeats for both basic and bottleneck (resnet10 and resnet14)
* `CspNet` refactored with dataclass config, simplified CrossStage3 (`cs3`) option. These are closer to YOLO-v5+ backbone defs.
* `srelpos` (shared relative position) models trained, and a medium w/ class token.
* EdgeNeXt `small` model. Better than original small, but not their new USI trained weights.
* `resnet10t` - 66.5 @ 176, 68.3 @ 224
* `resnet14t` - 71.3 @ 176, 72.3 @ 224
* `resnetaa50` - 80.6 @ 224, 81.6 @ 288
* `darknet53` - 80.0 @ 256, 80.5 @ 288
* `cs3darknet_m` - 77.0 @ 256, 77.6 @ 288
* `cs3darknet_focus_m` - 76.7 @ 256, 77.3 @ 288
* `cs3darknet_l` - 80.4 @ 256, 80.9 @ 288
* `cs3darknet_focus_l` - 80.3 @ 256, 80.9 @ 288
* `vit_srelpos_small_patch16_224` - 81.1 @ 224, 82.1 @ 320
* `vit_srelpos_medium_patch16_224` - 82.3 @ 224, 83.1 @ 320
* `vit_relpos_small_patch16_cls_224` - 82.6 @ 224, 83.6 @ 320
* `edgnext_small_rw` - 79.6 @ 224, 80.4 @ 320
* `cs3`, `darknet`, and `vit_*relpos` weights above all trained on TPU thanks to TRC program! Rest trained on overheating GPUs.
* `timm` datasets/readers. See (https://github.com/rwightman/pytorch-image-models/pull/1274#issuecomment-1178303103)
* `F.layer_norm(x.permute(0, 2, 3, 1), ...).permute(0, 3, 1, 2)` via `LayerNorm2d` in all cases (see the sketch below).
* `LayerNormExp2d` in `models/layers/norm.py`
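A sketch of what the `LayerNorm2d` note above describes (permute NCHW to NHWC, apply `F.layer_norm`, permute back); timm's actual implementation may differ in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerNorm2dSketch(nn.LayerNorm):
    """LayerNorm over the channel dim of an NCHW tensor, per the note above."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.permute(0, 2, 3, 1)
        x = F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps)
        return x.permute(0, 3, 1, 2)
```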
* `timm` Swin-V2-CR impl: will likely do a bit more to bring parts closer to official and decide whether to merge some aspects.
* `vit_relpos_small_patch16_224` - 81.5 @ 224, 82.5 @ 320 -- rel pos, layer scale, no class token, avg pool
* `vit_relpos_medium_patch16_rpn_224` - 82.3 @ 224, 83.1 @ 320 -- rel pos + res-post-norm, no class token, avg pool
* `vit_relpos_medium_patch16_224` - 82.5 @ 224, 83.3 @ 320 -- rel pos, layer scale, no class token, avg pool
* `vit_relpos_base_patch16_gapcls_224` - 82.8 @ 224, 83.9 @ 320 -- rel pos, layer scale, class token, avg pool (by mistake)
* Relative position experiments (`vision_transformer_relpos.py`) and Residual Post-Norm branches (from Swin-V2) (`vision_transformer*.py`):
  * `vit_relpos_base_patch32_plus_rpn_256` - 79.5 @ 256, 80.6 @ 320 -- rel pos + extended width + res-post-norm, no class token, avg pool
  * `vit_relpos_base_patch16_224` - 82.5 @ 224, 83.6 @ 320 -- rel pos, layer scale, no class token, avg pool
  * `vit_base_patch16_rpn_224` - 82.3 @ 224 -- rel pos + res-post-norm, no class token, avg pool
* (How to Train Your ViT)
* `vit_*` models support removal of class token, use of global average pool, use of fc_norm (ala beit, mae).
* `timm` models are now officially supported in fast.ai! Just in time for the new Practical Deep Learning course. `timmdocs` documentation link updated to timm.fast.ai.
* `seresnext101d_32x8d` - 83.69 @ 224, 84.35 @ 288
* `seresnextaa101d_32x8d` (anti-aliased w/ AvgPool2d) - 83.85 @ 224, 84.57 @ 288
* `ParallelBlock` and `LayerScale` option to base vit models to support model configs in Three things everyone should know about ViT
* `convnext_tiny_hnf` (head norm first) weights trained with (close to) A2 recipe, 82.2% top-1, could do better with more epochs.
* `norm_norm_norm`. IMPORTANT: this update for a coming 0.6.x release will likely de-stabilize the master branch for a while. Branch `0.5.x` or a previous 0.5.x release can be used if stability is required.
* `regnety_040` - 82.3 @ 224, 82.96 @ 288
* `regnety_064` - 83.0 @ 224, 83.65 @ 288
* `regnety_080` - 83.17 @ 224, 83.86 @ 288
* `regnetv_040` - 82.44 @ 224, 83.18 @ 288 (timm pre-act)
* `regnetv_064` - 83.1 @ 224, 83.71 @ 288 (timm pre-act)
* `regnetz_040` - 83.67 @ 256, 84.25 @ 320
* `regnetz_040h` - 83.77 @ 256, 84.5 @ 320 (w/ extra fc in head)
* `resnetv2_50d_gn` - 80.8 @ 224, 81.96 @ 288 (pre-act GroupNorm)
* `resnetv2_50d_evos` - 80.77 @ 224, 82.04 @ 288 (pre-act EvoNormS)
* `regnetz_c16_evos` - 81.9 @ 256, 82.64 @ 320 (EvoNormS)
* `regnetz_d8_evos` - 83.42 @ 256, 84.04 @ 320 (EvoNormS)
* `xception41p` - 82 @ 299 (timm pre-act)
* `xception65` - 83.17 @ 299
* `xception65p` - 83.14 @ 299 (timm pre-act)
* `resnext101_64x4d` - 82.46 @ 224, 83.16 @ 288
* `seresnext101_32x8d` - 83.57 @ 224, 84.270 @ 288
* `resnetrs200` - 83.85 @ 256, 84.44 @ 320
* `forward_head(x, pre_logits=False)` fn added to all models to allow separate calls of `forward_features` + `forward_head`
* From `forward_features`, for consistency with CNN models, token selection or pooling is now applied in `forward_head`
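In practice the split above looks like:

```python
import torch
import timm

model = timm.create_model('resnet50', pretrained=True)
x = torch.randn(1, 3, 224, 224)

features = model.forward_features(x)                     # unpooled backbone features
logits = model.forward_head(features)                    # pooling + classifier
pooled = model.forward_head(features, pre_logits=True)   # pooled features, no classifier
```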
* Chris Hughes posted a thorough run through of `timm` on his blog yesterday. Well worth a read. Getting Started with PyTorch Image Models (timm): A Practitioner's Guide
* Merging the `norm_norm_norm` branch back to master (ver 0.6.x) in next week or so.
* Beware `pip install git+https://github.com/rwightman/pytorch-image-models` installs!
* `0.5.x` releases and a `0.5.x` branch will remain stable with a cherry pick or two until dust clears. Recommend sticking to pypi install for a bit if you want stable.
* `mnasnet_small` - 65.6 top-1
* `mobilenetv2_050` - 65.9
* `lcnet_100/075/050` - 72.1 / 68.8 / 63.1
* `semnasnet_075` - 73
* `fbnetv3_b/d/g` - 79.1 / 79.7 / 82.0