---
license: apache-2.0
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
tags:
- thai
- handwriting-recognition
- vision-language
- fine-tuned
- vision
datasets:
- iapp/thai_handwriting_dataset
language:
- th
pipeline_tag: image-to-text
---

# Thai Handwriting Recognition Vision-Language Model

A LoRA-adapted vision-language model based on Llama-3.2-11B-Vision-Instruct that transcribes Thai handwritten text from images.

## Model Architecture

- Base model: meta-llama/Llama-3.2-11B-Vision-Instruct
- Fine-tuning: LoRA adapter trained on the iapp/thai_handwriting_dataset, applied on top of the base model with PEFT
- Adapter weights: Aekanun/thai-handwriting-llm
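
The LoRA hyperparameters (rank, alpha, target modules) are stored in the adapter repository's `adapter_config.json`. If you want to check them without downloading the 11B base weights, you can load just the PEFT config; a minimal sketch, assuming the adapter id `Aekanun/thai-handwriting-llm` used in the inference code below:

```python
from peft import PeftConfig

# Fetches and parses only adapter_config.json; no model weights are downloaded
peft_config = PeftConfig.from_pretrained("Aekanun/thai-handwriting-llm")

print(peft_config.peft_type)                # adapter type (LoRA for this model)
print(peft_config.base_model_name_or_path)  # base checkpoint the adapter was trained against
print(peft_config)                          # full config, including rank / alpha / target modules
```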

## Inference

### Single Image
```python
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import PeftModel
from PIL import Image

def load_model():
    # Model paths
    base_model_path = "meta-llama/Llama-3.2-11B-Vision-Instruct"
    adapter_path = "Aekanun/thai-handwriting-llm"
    
    # Load processor
    processor = AutoProcessor.from_pretrained(
        base_model_path,
        token=True
    )
    
    # Load base model
    base_model = AutoModelForVision2Seq.from_pretrained(
        base_model_path,
        device_map="auto",
        torch_dtype=torch.float16,
        trust_remote_code=True,
        token=True
    )
    
    # Load adapter
    model = PeftModel.from_pretrained(
        base_model,
        adapter_path,
        device_map="auto",
        torch_dtype=torch.float16,
        token=True
    )
    
    return model, processor

def transcribe_thai_handwriting(image_path, model, processor):
    # Load and prepare image
    image = Image.open(image_path).convert("RGB")
    
    # Create prompt
    prompt = """Transcribe the Thai handwritten text from the provided image.
Only return the transcription in Thai language."""
    
    # Build the chat message; the image placeholder comes first and the
    # actual PIL image is passed to the processor separately below
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": prompt}
            ],
        }
    ]
    
    # Render the chat template and tensorize text + image together
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    # The chat template already inserts special tokens, so skip adding them again
    inputs = processor(text=text, images=image, add_special_tokens=False, return_tensors="pt")
    inputs = {k: v.to(model.device) for k, v in inputs.items()}
    
    # Generate
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=512,
            do_sample=False,
            pad_token_id=processor.tokenizer.pad_token_id
        )
    
    # Decode only the newly generated tokens, not the echoed prompt
    prompt_length = inputs["input_ids"].shape[-1]
    transcription = processor.decode(outputs[0][prompt_length:], skip_special_tokens=True)
    return transcription.strip()

# Example usage
if __name__ == "__main__":
    # Load model
    model, processor = load_model()
    
    # Transcribe image
    image_path = "path/to/your/image.jpg"
    result = transcribe_thai_handwriting(image_path, model, processor)
    print(f"Transcription: {result}")