
habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-GGUF

Quantized GGUF model files for TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1 from habanoz

Original Model Card:

TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T fine-tuned on the Dolly 15k instruction dataset.
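
The card does not name the exact dataset repo; presumably it is databricks/databricks-dolly-15k, as the "dolly-15k" in the model name suggests. A minimal sketch of loading it with the datasets library, under that assumption:

```python
# Minimal sketch; the dataset ID is inferred from the model name
# ("dolly-15k"), not confirmed by the original card.
from datasets import load_dataset

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
# Each row has instruction / context / response / category fields.
print(dolly[0]["instruction"])
print(dolly[0]["response"])
```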

Training took 1 hour on an 'ml.g5.xlarge' instance.

hyperparameters = {
  'num_train_epochs': 3,                            # number of training epochs
  'per_device_train_batch_size': 6,                 # batch size for training
  'gradient_accumulation_steps': 2,                 # number of update steps to accumulate
  'gradient_checkpointing': True,                   # save memory, but slower backward pass
  'bf16': True,                                     # use bfloat16 precision
  'tf32': True,                                     # use tf32 precision
  'learning_rate': 2e-4,                            # learning rate
  'max_grad_norm': 0.3,                             # maximum norm (for gradient clipping)
  'warmup_ratio': 0.03,                             # warmup ratio
  'lr_scheduler_type': 'constant',                  # learning rate scheduler
  'save_strategy': 'epoch',                         # save strategy for checkpoints
  'logging_steps': 10,                              # log every x steps
  'merge_adapters': True,                           # whether to merge LoRA into the model (needs more memory)
  'use_flash_attn': True,                           # whether to use Flash Attention
}
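
For illustration, the standard keys above map one-to-one onto transformers.TrainingArguments; merge_adapters and use_flash_attn are custom flags consumed by the training script itself, so they are left out. This is a minimal sketch under that assumption, not the author's actual script, and output_dir is a placeholder:

```python
# Minimal sketch, assuming the standard Hugging Face Trainer API.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tinyllama-dolly-output",  # placeholder path, not from the card
    num_train_epochs=3,
    per_device_train_batch_size=6,
    gradient_accumulation_steps=2,
    gradient_checkpointing=True,
    bf16=True,
    tf32=True,
    learning_rate=2e-4,
    max_grad_norm=0.3,
    warmup_ratio=0.03,
    lr_scheduler_type="constant",
    save_strategy="epoch",
    logging_steps=10,
)
```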
Format: GGUF
Model size: 1.1B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

Inference Examples
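
A hedged sketch of running one of the GGUF files locally with llama-cpp-python; the quantized filename and the prompt template below are assumptions, so check the repo's file list before copying:

```python
# Minimal sketch using huggingface_hub + llama-cpp-python.
# The filename is an assumed example (q4_k_m); pick an actual file from the repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="afrideva/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-GGUF",
    filename="tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.q4_k_m.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
# Prompt format is a guess; the card does not document a template.
out = llm("### Instruction:\nSummarize what GGUF is.\n\n### Response:\n", max_tokens=128)
print(out["choices"][0]["text"])
```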
