---
tags:
- image-to-text
- image-captioning
language:
- th
---

# Thai Image Captioning

Encoder-decoder style image captioning model using [Swin-L](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) as the vision encoder and [WangchanBERTa](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) as the text decoder. Trained on Thai-language MSCOCO and the IPU24 dataset.

# Usage

With `VisionEncoderDecoderModel`:

```python
from PIL import Image
from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer

device = 'cuda'
gen_kwargs = {"max_length": 120, "num_beams": 4}
model_path = 'Natthaphon/thaicapgen-swin-wangchan'
image_path = 'example.jpg'  # placeholder: path to the image you want to caption

# Load the image processor, tokenizer, and model from the Hub
feature_extractor = AutoImageProcessor.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = VisionEncoderDecoderModel.from_pretrained(model_path).to(device)

# Preprocess the image, generate token ids with beam search, and decode them
pixel_values = feature_extractor(images=[Image.open(image_path)], return_tensors="pt").pixel_values
pixel_values = pixel_values.to(device)
output_ids = model.generate(pixel_values, **gen_kwargs)
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
```

You can also load the model with `AutoModel`, but this requires `trust_remote_code=True`:

```python
from transformers import AutoModel

model_path = 'Natthaphon/thaicapgen-swin-wangchan'
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).to(device)
```

# Acknowledgement

This work is partially supported by the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (PMU-B) [Grant number B04G640107].
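
# Full example

For reference, below is a minimal end-to-end sketch of caption generation through the `AutoModel` route. It assumes the remote-code class exposes the same `generate` interface as `VisionEncoderDecoderModel`; this is an assumption, not something the snippets above confirm, and `example.jpg` is a placeholder path.

```python
# Hedged sketch: assumes the trust_remote_code model supports the standard
# `generate` interface; 'example.jpg' is a placeholder image path.
from PIL import Image
from transformers import AutoModel, AutoImageProcessor, AutoTokenizer

device = 'cuda'
model_path = 'Natthaphon/thaicapgen-swin-wangchan'

feature_extractor = AutoImageProcessor.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).to(device)

# Preprocess the image, generate with beam search, and decode the Thai caption
pixel_values = feature_extractor(images=[Image.open('example.jpg')], return_tensors="pt").pixel_values.to(device)
output_ids = model.generate(pixel_values, max_length=120, num_beams=4)  # assumed interface
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```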