|
--- |
|
library_name: transformers |
|
metrics: |
|
- meteor |
|
base_model: |
|
- meta-llama/Llama-3.2-11B-Vision-Instruct |
|
--- |
|
|
|
# Model Card |
|
|
|
|
|
- **Developed by:** [Genloop.ai](https://huggingface.co/genloop) |
|
- **Funded by:** [Genloop Labs, Inc.](https://genloop.ai/) |
|
- **Model type:** Vision Language Model (VLM) |
|
- **Finetuned from model:** [Meta Llama 3.2 11B Vision Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) |
|
- **Usage:** This model is intended for product cataloging, i.e., generating SEO-optimized product descriptions from a product image, name, and category
|
|
|
|
|
|
|
## How to Get Started with the Model |
|
|
|
Make sure to update your transformers installation via `pip install --upgrade transformers`. |
|
|
|
```python
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

# The base model id is shown here as a placeholder; point this at the
# finetuned checkpoint's repo id to load the finetuned weights.
model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"

model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Fetch the product image.
url = "insert_your_image_link_here"
image = Image.open(requests.get(url, stream=True).raw)

# Prompt template; the placeholder names must match the .format() keywords below.
user_prompt = """Create a SHORT Product description based on the given ##PRODUCT NAME##, ##CATEGORY##, and an image of the product.
Only return description. The description should be SEO optimized and for a better mobile search experience.

##PRODUCT NAME##: {product_name}
##CATEGORY##: {product_category}"""

product_name = "insert_your_product_name_here"
product_category = "insert_your_product_category_here"

# Build the chat-formatted input: an image placeholder followed by the text prompt.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": user_prompt.format(product_name=product_name, product_category=product_category)},
    ]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    image,
    input_text,
    add_special_tokens=False,
    return_tensors="pt",
).to(model.device)

# Generate the description; raise max_new_tokens for longer outputs.
output = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output[0]))
```
|
|
|
|
|
## Training Details |
|
|
|
This model was finetuned on the [Amazon-Product-Descriptions](https://huggingface.co/datasets/philschmid/amazon-product-descriptions-vlm) dataset, whose reference descriptions were generated using Gemini Flash.
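
For reference, here is a minimal sketch of loading the finetuning dataset with the `datasets` library; the `train` split name is an assumption, so check the dataset card if it differs.

```python
from datasets import load_dataset

# Load the finetuning dataset from the Hugging Face Hub
# (assumes a "train" split exists).
dataset = load_dataset("philschmid/amazon-product-descriptions-vlm", split="train")

# Inspect the schema and one example before building training prompts.
print(dataset.features)
print(dataset[0])
```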
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a configuration sketch follows the list):
|
- learning_rate: 0.0002 |
|
- train_batch_size: 2 |
|
- seed: 3407 |
|
- gradient_accumulation_steps: 4 |
|
- gradient_checkpointing: True |
|
- total_train_batch_size: 8 |
|
- lr_scheduler_type: linear |
|
- num_epochs: 3 |
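
As a non-authoritative sketch, these values map onto `transformers.TrainingArguments` roughly as follows. The `output_dir` and the `bf16` flag are assumptions (the card does not state the training framework or precision), and the total train batch size of 8 assumes a single device.

```python
from transformers import TrainingArguments

# Hyperparameters from the list above; output_dir is a hypothetical placeholder.
training_args = TrainingArguments(
    output_dir="llama-3.2-11b-vision-product-descriptions",  # hypothetical
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,  # effective train batch size: 2 * 4 = 8
    gradient_checkpointing=True,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    seed=3407,
    bf16=True,  # assumption: matches the bfloat16 inference dtype above
)
```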
|
|
|
|
|
|
|
#### Results |
|
|
|
| Model                         | Finetuned | Inference Latency | METEOR Score |
|-------------------------------|-----------|-------------------|--------------|
| Llama-3.2-11B-Vision-Instruct | No        | 1.68              | 0.38         |
| Llama-3.2-11B-Vision-Instruct | Yes       | 1.68              | 0.53         |
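
The card does not specify the evaluation harness; as a hedged sketch, METEOR can be computed with the `evaluate` library, where `predictions` and `references` below are placeholder lists of generated and reference descriptions.

```python
import evaluate

# Load the METEOR metric (requires the nltk package).
meteor = evaluate.load("meteor")

# Placeholder data: model outputs vs. the Gemini Flash reference descriptions.
predictions = ["insert_generated_description_here"]
references = ["insert_reference_description_here"]

results = meteor.compute(predictions=predictions, references=references)
print(results["meteor"])
```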
|
|
|
|
|
|
|
|
|
|