Can we test RicoSCA with OFA and LayoutLMv3?
#2 by ivelin · opened
HF Hub already has several recent models that perform well on image-phrase grounding and referring expressions. Curious whether we could use this RicoSCA dataset to train and test their performance on UI data.
Yeah, I think it would be awesome to do, and it would be a pretty neat tool for developers for things like automatic alt text.
FYI, I mashed up a RefExp-formatted version based on the newer UIBert dataset:
https://huggingface.co/datasets/ivelin/ui_refexp_saved/viewer/ivelin--ui_refexp_saved_Jan2023/train
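For anyone who wants to poke at it, here's a minimal loading sketch. The field names in the comment are just my guess at the RefExp schema; check the viewer link above for the actual columns:

```python
from datasets import load_dataset

# Load the train split straight from the Hub.
ds = load_dataset("ivelin/ui_refexp_saved", split="train")

# Inspect one record. Expected fields (an assumption based on the
# RefExp task): a UI screenshot, a natural-language referring
# expression, and a target bounding box.
example = ds[0]
print(example.keys())
```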
Will play with the multimodal RefExp task that is explored in the UIBert, Pix2Struct, and IPAProbing papers. Happy to collaborate if anyone else is already working on this.
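If it helps anyone getting started, here's a quick sketch of the usual RefExp grounding metric: a prediction counts as correct when its IoU with the gold box is at least 0.5. It assumes boxes are `(xmin, ymin, xmax, ymax)` in the same coordinate space:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    # Overlap rectangle; clamp to zero when the boxes don't intersect.
    ix0, iy0 = max(ax0, bx0), max(ay0, by0)
    ix1, iy1 = min(ax1, bx1), min(ay1, by1)
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = ((ax1 - ax0) * (ay1 - ay0)
             + (bx1 - bx0) * (by1 - by0) - inter)
    return inter / union if union > 0 else 0.0

def accuracy_at_50(preds, golds):
    """Fraction of predicted boxes with IoU >= 0.5 against the gold boxes."""
    hits = sum(iou(p, g) >= 0.5 for p, g in zip(preds, golds))
    return hits / len(golds)
```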