
[Paper] [GitHub]

FARE CLIP ViT-L/14 model.

Unsupervised adversarial fine-tuning of the OpenAI CLIP ViT-L/14 vision encoder on ImageNet, using an ℓ∞ threat model with radius 4/255.
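
FARE keeps the embeddings of adversarially perturbed images close to those of the frozen original CLIP encoder, so the fine-tuned model remains compatible with downstream vision-language pipelines. A sketch of the training objective, with notation adapted from the paper (φ is the fine-tuned vision encoder, φ_org the original frozen one):

$$\min_{\phi}\; \mathbb{E}_{x}\, \max_{\|\delta\|_\infty \le \varepsilon} \big\|\phi(x+\delta)-\phi_{\mathrm{org}}(x)\big\|_2^2, \qquad \varepsilon = 4/255$$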

Usage

import open_clip

model, _, image_processor = open_clip.create_model_and_transforms('hf-hub:chs20/fare4-clip')
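
For a quick sanity check, here is a minimal zero-shot classification sketch using the standard open_clip API; it repeats the loading step so it runs on its own, and the image path and candidate captions are placeholders:

import torch
from PIL import Image
import open_clip

model, _, image_processor = open_clip.create_model_and_transforms('hf-hub:chs20/fare4-clip')
tokenizer = open_clip.get_tokenizer('hf-hub:chs20/fare4-clip')
model.eval()

# Placeholder inputs; substitute your own image and candidate captions.
image = image_processor(Image.open('example.jpg')).unsqueeze(0)
text = tokenizer(['a photo of a cat', 'a photo of a dog'])

with torch.no_grad():
    # Encode, L2-normalize, and score captions by scaled cosine similarity.
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probabilities over the candidate captions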

Citation

If you find this model useful, please consider citing our paper:

@inproceedings{schlarmann2024robustclip,
    title={Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models},
    author={Christian Schlarmann and Naman Deep Singh and Francesco Croce and Matthias Hein},
    booktitle={ICML},
    year={2024}
}