
gemma2-mitra-base

This model is based on gemma-2-9b and continuously pretrained for 2 epochs on a total of 7B tokens drawn from various Buddhist data collections preserved in Sanskrit, Tibetan, English, and Pāli.
A publication describing the dataset and training details will follow soon.

Model Details

For details on how to run this model, please see the gemma-2-9b repository: https://huggingface.co/google/gemma-2-9b (a minimal usage sketch is also given below).
Please be aware that this is a base model without any instruction finetuning, so it will perform poorly on general tasks unless you provide at least a few-shot prompt with examples.
There is an instruction-finetuned version here: https://huggingface.co/buddhist-nlp/gemma-2-mitra-it
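Since the model shares the gemma-2-9b architecture, it can presumably be loaded with the standard `transformers` causal-LM classes, as in the official gemma-2-9b card. The sketch below is a minimal, non-authoritative example: the repository id `buddhist-nlp/gemma2-mitra-base` and the few-shot prompt are assumptions for illustration, and the prompt simply demonstrates the few-shot style a base model needs.

```python
# Minimal sketch, assuming the repo id "buddhist-nlp/gemma2-mitra-base"
# and standard gemma-2-9b style loading via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "buddhist-nlp/gemma2-mitra-base"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # checkpoint is stored in F32; bf16 roughly halves memory use
    device_map="auto",
)

# Base model: give few-shot examples instead of instructions.
prompt = (
    "Sanskrit: dharma\nEnglish: teaching, law\n\n"
    "Sanskrit: karuṇā\nEnglish: compassion\n\n"
    "Sanskrit: prajñā\nEnglish:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)

# Print only the newly generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```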

Model size: 9.24B parameters (safetensors, F32 tensors)