
πŸ‘οΈ GLaMM-GranD-Pretrained


πŸ“ Description

GLaMM-GranD-Pretrained is the model pretrained on the GranD dataset, a large-scale dataset generated with an automated annotation pipeline for detailed region-level understanding and segmentation masks. GranD comprises 7.5M unique concepts anchored in a total of 810M regions, each with a segmentation mask.

💻 Download

To get started with GLaMM-GranD-Pretrained, follow these steps:

git lfs install
git clone https://huggingface.co/MBZUAI/GLaMM-GranD-Pretrained
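Alternatively, here is a minimal sketch of downloading the checkpoint programmatically with the huggingface_hub Python library; the local_dir path is an arbitrary example, not something mandated by the model.

from huggingface_hub import snapshot_download

# Download all files of the MBZUAI/GLaMM-GranD-Pretrained repository
# into a local folder (the folder name here is an arbitrary choice).
snapshot_download(
    repo_id="MBZUAI/GLaMM-GranD-Pretrained",
    local_dir="GLaMM-GranD-Pretrained",
)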

📚 Additional Resources

Paper: https://arxiv.org/abs/2311.03356

📜 Citations and Acknowledgments

@article{hanoona2023GLaMM,
  title={GLaMM: Pixel Grounding Large Multimodal Model},
  author={Rasheed, Hanoona and Maaz, Muhammad and Shaji, Sahal and Shaker, Abdelrahman and Khan, Salman and Cholakkal, Hisham and Anwer, Rao M. and Xing, Eric and Yang, Ming-Hsuan and Khan, Fahad S.},
  journal={arXiv preprint arXiv:2311.03356},
  year={2023}
}