Built with Axolotl

Base model:

PY007/TinyLlama-1.1B-intermediate-step-480k-1T

Dataset:

Fine-tuned on the OpenOrca GPT-4 subset for 1 epoch, using the ChatML format.
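Since the model was trained on ChatML-formatted conversations, prompts at inference time should follow the same layout. A minimal sketch of building such a prompt (the helper name and system message are illustrative, not part of this repository):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML prompt: each turn is wrapped in
    <|im_start|>{role} ... <|im_end|>, and the string ends with an
    open assistant turn for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are a helpful assistant.",   # example system message (assumption)
    "What is TinyLlama?",
)
print(prompt)
```

Generation should then stop on the `<|im_end|>` token, so the assistant turn is closed the same way the training data closes it.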

Model License:

Apache 2.0, following the TinyLlama base model.

Quantisation:

Hardware and training details:

Hardware: 1× RTX A5000; ~16 hours to complete 1 epoch. The GPU was rented from autodl.com, and this fine-tuning cost around $3. See https://wandb.ai/jeff200402/TinyLlama-Orca?workspace= for more details.

Model size:

1.1B params, BF16 tensors, stored as Safetensors.