---
library_name: transformers
datasets:
- EgorShibaev/TikZ-short-code
pipeline_tag: image-to-text
---
# Model Card for TikZ-llava-1.5-7b
A fine-tuned multimodal LLaVA model that generates TikZ code for diagrams from hand-drawn sketches.
## How to Get Started with the Model
```python
from transformers import pipeline
from PIL import Image
import requests

pipe = pipeline("image-to-text", model="waleko/TikZ-llava-1.5-7b")

# Load an example hand-drawn sketch
url = "https://waleko.github.io/data/image.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# LLaVA-style conversation prompt; the <image> token marks where the sketch is inserted
prompt = "Assistant helps to write down the TikZ code for the user's image. USER: <image>\nWrite down the TikZ code to draw the diagram shown in the image. ASSISTANT: "
print(pipe(image, prompt=prompt)[0]['generated_text'])
```
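The pipeline returns plain text, so the TikZ code still has to be compiled to see the diagram. Below is a minimal sketch of a hypothetical `render_tikz` helper (not part of this repository) that wraps generated code in a `standalone` LaTeX document and compiles it with `pdflatex`; it assumes a local TeX installation with the `tikz` and `standalone` packages, and that the assistant's reply has already been separated from the prompt portion of the generated text.

```python
import pathlib
import subprocess

def render_tikz(tikz_code: str, out_dir: str = "render") -> pathlib.Path:
    """Wrap TikZ code in a standalone document and compile it to PDF with pdflatex."""
    pathlib.Path(out_dir).mkdir(exist_ok=True)
    tex = (
        "\\documentclass[tikz]{standalone}\n"
        "\\begin{document}\n"
        f"{tikz_code}\n"
        "\\end{document}\n"
    )
    tex_path = pathlib.Path(out_dir) / "diagram.tex"
    tex_path.write_text(tex)
    # nonstopmode keeps pdflatex from pausing on errors in model-generated code
    subprocess.run(
        ["pdflatex", "-interaction=nonstopmode", "-output-directory", out_dir, str(tex_path)],
        check=True,
    )
    return tex_path.with_suffix(".pdf")
```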
## Training Details
### Training Data
Trained on the synthetic [TikZ-short-code](https://huggingface.co/datasets/EgorShibaev/TikZ-short-code) dataset.
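
To inspect the training data yourself, a minimal sketch using the `datasets` library is shown below; it prints the dataset structure rather than assuming particular split or column names, so check the dataset card for the actual schema.

```python
from datasets import load_dataset

# Download the dataset and show its splits and columns
ds = load_dataset("EgorShibaev/TikZ-short-code")
print(ds)

# Look at one example record from the first available split
first_split = next(iter(ds.values()))
print(first_split[0])
```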