---
datasets:
- UCSC-VLAA/Recap-DataComp-1B
language:
- en
library_name: peft
tags:
- florence-2
- lora
- adapter
- image-captioning
- peft
---
# Florence-2 Recap-DataComp LoRA Adapter
This repository contains a LoRA adapter for the microsoft/Florence-2-base-ft model, trained on the UCSC-VLAA/Recap-DataComp-1B dataset. It enhances the base model's captioning ability, producing more detailed and descriptive image captions.
## Usage
To use this LoRA adapter, load it on top of the microsoft/Florence-2-base-ft model with the PEFT library. Here's an example:
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
from peft import PeftModel
import requests

def caption(image):
    # Load the base Florence-2 model and its processor
    base_model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-base-ft", trust_remote_code=True)
    processor = AutoProcessor.from_pretrained("microsoft/Florence-2-base-ft", trust_remote_code=True)

    # Apply this LoRA adapter on top of the base model
    adapter_name = "NikshepShetty/Florence-2-Recap-DataComp"
    model = PeftModel.from_pretrained(base_model, adapter_name)

    # Florence-2 selects its task via a special prompt token
    prompt = "<MORE_DETAILED_CAPTION>"
    inputs = processor(text=prompt, images=image, return_tensors="pt")

    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
        do_sample=False,
        num_beams=3
    )

    # Keep special tokens: post_process_generation needs them to parse the output
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
    parsed_answer = processor.post_process_generation(
        generated_text,
        task="<MORE_DETAILED_CAPTION>",
        image_size=(image.width, image.height)
    )
    print(parsed_answer)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
caption(image)
```
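Running this prints a dictionary keyed by the task token, e.g. `{'<MORE_DETAILED_CAPTION>': '...'}`; the caption text itself will vary with the image.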
This code demonstrates how to:
- Load the base Florence-2 model
- Load the LoRA adapter
- Process an image and generate a detailed caption
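If you caption many images, a common optimization is to load everything once, optionally merge the LoRA weights into the base model with PEFT's merge_and_unload, and run in half precision on a GPU. The sketch below is a minimal example under those assumptions (a CUDA device and float16; adjust both to your hardware):

```python
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
from peft import PeftModel

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# Load the base model and adapter once, then merge the LoRA weights (optional)
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-base-ft", torch_dtype=dtype, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, "NikshepShetty/Florence-2-Recap-DataComp")
model = model.merge_and_unload().to(device)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-base-ft", trust_remote_code=True)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)

prompt = "<MORE_DETAILED_CAPTION>"
# BatchFeature.to moves tensors to the device and casts floating-point ones to dtype
inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, dtype)
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
    num_beams=3,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
print(processor.post_process_generation(generated_text, task=prompt, image_size=(image.width, image.height)))
```

Merging folds the low-rank updates into the base weights, so generation runs without the adapter's extra matrix multiplications; skip merge_and_unload if you want to keep the adapter swappable.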
Note: Make sure you have the required libraries installed: transformers, peft, einops, flash_attn, timm, Pillow, and requests.
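One way to install them (note that flash_attn is published on PyPI as flash-attn and needs a CUDA toolchain to build):

```bash
pip install transformers peft einops flash-attn timm Pillow requests
```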