
Model Summary

NLLB-CLIP combines the text encoder from the NLLB model with the image encoder from the LAION CLIP model. This extends the model's capabilities to the 201 languages covered by Flores-200. NLLB-CLIP sets a new state of the art on the Crossmodal-3600 dataset, performing particularly well on low-resource languages. You can find more details about the model in the paper.

How to use

The model repo contains the model code files that allow using NLLB-CLIP like any other model from the Hub. The interface is also compatible with CLIP models. Example code is below:

from transformers import AutoTokenizer, CLIPProcessor
import requests
from PIL import Image

from modeling_nllb_clip import NLLBCLIPModel # local file from the repo

# use only the image preprocessor from the LAION CLIP checkpoint
processor = CLIPProcessor.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K")
processor = processor.image_processor
# the NLLB tokenizer covers the 201 Flores-200 languages
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-1.3B")
# download a sample image and preprocess it
image_path = "https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/butterfly.jpg"
image = Image.open(requests.get(image_path, stream=True).raw)
image_inputs = processor(images=image, return_tensors="pt")
# tokenize the candidate labels for zero-shot classification
text_inputs = tokenizer(
    ["cat", "dog", "butterfly"],
    padding="longest",
    return_tensors="pt",
)

# load the pretrained NLLB-CLIP weights from the Hub
hf_model = NLLBCLIPModel.from_pretrained("visheratin/nllb-clip-large")

outputs = hf_model(
    input_ids=text_inputs.input_ids,
    attention_mask=text_inputs.attention_mask,
    pixel_values=image_inputs.pixel_values,
)
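
Because the interface is CLIP-compatible, the returned outputs should expose the usual CLIP fields such as logits_per_image; the snippet below is a minimal sketch under that assumption, turning the scores into per-label probabilities for the image.

# Minimal sketch, assuming a CLIP-style output with `logits_per_image`
# (image-to-text similarity scores for the three candidate labels).
probs = outputs.logits_per_image.softmax(dim=-1)
for label, prob in zip(["cat", "dog", "butterfly"], probs[0].tolist()):
    print(f"{label}: {prob:.3f}")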

Acknowledgements

I thank Lambda Cloud for providing compute resources to train the model.
