---
license: gpl-3.0
tags:
  - DocVQA
  - Document Question Answering
  - Document Visual Question Answering
datasets:
  - MP-DocVQA
language:
  - en
---

# T5 base fine-tuned on MP-DocVQA

This is a pretrained T5-base model fine-tuned on the Multi-Page DocVQA (MP-DocVQA) dataset.

This model was used as a baseline in [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/abs/2212.05935).
- Results on the MP-DocVQA dataset are reported in Table 2.
- Training hyperparameters can be found in Table 8 of Appendix D.

## How to use

Here is how to use this model to answer a question about a given text context in PyTorch:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load the tokenizer and model from the Hub.
tokenizer = T5Tokenizer.from_pretrained("rubentito/t5-base-mpdocvqa")
model = T5ForConditionalGeneration.from_pretrained("rubentito/t5-base-mpdocvqa")

context = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done?"
input_text = "question: {:s}  context: {:s}".format(question, context)

# Generate the answer and decode the first (and only) output sequence.
encoding = tokenizer(input_text, return_tensors="pt")
output = model.generate(**encoding)
answer = tokenizer.decode(output[0], skip_special_tokens=True)
```
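MP-DocVQA questions are posed over multi-page documents, while the snippet above shows a single context string. As a minimal sketch (the `build_input` helper and the plain whitespace concatenation of page texts are illustrative assumptions, not the paper's exact serialization), the recognized text of each page can be joined into one context before applying the same question/context template:

```python
def build_input(question, page_texts):
    """Join the OCR text of each page into a single context string, then
    apply the question/context template used above.
    NOTE: plain concatenation is an illustrative assumption; see the paper
    for the exact input serialization used during training."""
    context = " ".join(page_texts)
    return "question: {:s}  context: {:s}".format(question, context)

pages = [
    "Page 1: Annual report 2020. Total revenue: 1.2M USD.",
    "Page 2: The board approved the budget in March.",
]
input_text = build_input("What is the total revenue?", pages)
```

The resulting `input_text` can be passed to the tokenizer and `model.generate` exactly as in the single-context example.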

## BibTeX entry

```bibtex
@article{tito2022hierarchical,
  title={Hierarchical multimodal transformers for Multi-Page DocVQA},
  author={Tito, Rub{\`e}n and Karatzas, Dimosthenis and Valveny, Ernest},
  journal={arXiv preprint arXiv:2212.05935},
  year={2022}
}
```