not enough RAM to compute embeddings

#1 by TiSy - opened

Hello,

I am a beginner at AI and image embeddings, and I have been wanting to try out your model. However, when I encode just a couple of images, my RAM quickly reaches its limit. I have 32 GB of RAM (about 22 GB of which is actually available), but it only takes a handful of images to use all of it. Any ideas what I am doing wrong, or how I could work around this? The problem isn't specific to this model: I see the same behavior with vit_large_patch16_224.augreg_in21k_ft_in1k, vit_large_patch14_clip_224.openai_ft_in12k_in1k, and convnext_xxlarge.clip_laion2b_soup_ft_in1k.
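
For reference, here is a minimal sketch of what I understand the embedding loop should look like (model name, image paths, and shapes are just placeholder examples). From what I have read, wrapping the forward pass in torch.no_grad() is supposed to stop autograd from retaining activations; is this the correct way to keep memory flat?

```python
import timm
import torch
from PIL import Image

# num_classes=0 strips the classifier head so the model
# returns pooled embeddings instead of logits.
model = timm.create_model(
    "vit_large_patch16_224.augreg_in21k_ft_in1k",
    pretrained=True,
    num_classes=0,
)
model.eval()

# Build the preprocessing pipeline the model was trained with.
config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**config, is_training=False)

image_paths = ["cat.jpg", "dog.jpg"]  # placeholder paths

embeddings = []
# Without torch.no_grad(), each stored output keeps its autograd
# graph alive, so memory grows with every encoded image.
with torch.no_grad():
    for path in image_paths:
        img = Image.open(path).convert("RGB")
        x = transform(img).unsqueeze(0)  # (1, 3, H, W)
        embeddings.append(model(x).squeeze(0))

embeddings = torch.stack(embeddings)
print(embeddings.shape)  # (num_images, embedding_dim), e.g. (2, 1024)
```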

Sincerely Tim
