A few weeks ago, we uploaded the MERIT Dataset to Hugging Face!
Now, we are excited to share the MERIT Dataset paper on arXiv!
The MERIT Dataset: Modelling and Efficiently Rendering Interpretable Transcripts (2409.00447)
The MERIT Dataset is a fully synthetic, labeled dataset created for training and benchmarking LLMs on Visually Rich Document Understanding tasks. It is also designed to help detect biases and improve interpretability in LLMs, areas we are actively working on.
MERIT contains synthetically rendered students' transcripts of records from different schools in English and Spanish. We plan to expand the dataset into different contexts (synthetic medical/insurance documents, synthetic IDs, etc.). Want to collaborate? Do you have any feedback?
Resources:
- Dataset: de-Rodrigo/merit
- Code and generation pipeline: https://github.com/nachoDRT/MERIT-Dataset
PS: We are grateful to Hugging Face for providing the fantastic tools and resources we find on the platform and, more specifically, to @nielsr for sharing the fine-tuning/inference scripts we have used in our benchmark.