arXiv:2205.14100

GIT: A Generative Image-to-text Transformer for Vision and Language

Published on May 27, 2022
Authors: Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang

Abstract

In this paper, we design and train a Generative Image-to-text Transformer, GIT, to unify vision-language tasks such as image/video captioning and question answering. While generative models provide a consistent network architecture between pre-training and fine-tuning, existing work typically contains complex structures (uni/multi-modal encoder/decoder) and depends on external modules such as object detectors/taggers and optical character recognition (OCR). In GIT, we simplify the architecture to one image encoder and one text decoder under a single language modeling task. We also scale up the pre-training data and the model size to boost performance. Without bells and whistles, GIT establishes new state-of-the-art results on 12 challenging benchmarks by a large margin. For instance, our model surpasses human performance for the first time on TextCaps (138.2 vs. 125.5 in CIDEr). Furthermore, we present a new scheme of generation-based image classification and scene text recognition, achieving decent performance on standard benchmarks. Code is released at https://github.com/microsoft/GenerativeImage2Text.
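The single-task design described in the abstract makes inference simple: the image encoder maps the image to visual tokens, and the text decoder generates the output autoregressively under a plain language-modeling objective. Below is a minimal captioning sketch using the transformers library; the microsoft/git-base-coco checkpoint ID and the transformers integration of GIT are assumptions beyond the paper itself.

    # Minimal GIT captioning sketch. Assumes the Hugging Face `transformers`
    # integration of GIT and the `microsoft/git-base-coco` Hub checkpoint.
    import requests
    import torch
    from PIL import Image
    from transformers import AutoProcessor, AutoModelForCausalLM

    processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
    model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")

    # Any RGB image works; here a COCO validation image is downloaded.
    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = Image.open(requests.get(url, stream=True).raw)

    # The image encoder turns pixels into visual tokens; the text decoder
    # then generates the caption token by token.
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    with torch.no_grad():
        generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
    print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])

The generation-based image classification and scene text recognition schemes mentioned in the abstract reuse this same interface: the model generates the class name or the transcribed text as if it were a caption, so no task-specific head is needed.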


Models citing this paper 29

Datasets citing this paper 0

No datasets link to this paper yet.

Cite arxiv.org/abs/2205.14100 in a dataset README.md to link it from this page.
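For instance, a dataset card containing a line like the following (a hypothetical README.md excerpt, shown only to illustrate the linking mechanism) would be picked up by this page:

    This dataset accompanies GIT (arxiv.org/abs/2205.14100).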

Spaces citing this paper 137

Collections including this paper 1