---
base_model: eryk-mazus/tinyllama-with-custom-tokenizer
tags:
  - generated_from_trainer
model-index:
  - name: workspace/tmp/
    results: []
---

Built with Axolotl

See axolotl config

axolotl version: 0.3.0

base_model: eryk-mazus/tinyllama-with-custom-tokenizer

model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
is_llama_derived_model: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: eryk-mazus/polka-pretrain-en-pl-v1
    type: completion # format from earlier
    field: text # Optional[str] default: text, field to use for completion data

dataset_prepared_path:
val_set_size: 0.05
output_dir: /workspace/tmp/

sequence_len: 2048
sample_packing: false

adapter: 
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: polka
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 2
micro_batch_size: 4
num_epochs: 1
lr_scheduler:
learning_rate: 0.00005

optimizer: adamw_torch
adam_beta1: 0.9 
adam_beta2: 0.95
adam_epsilon: 0.00001
max_grad_norm: 1.0

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

warmup_steps: 0
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

eval_steps: 1000
save_steps: 1000
save_total_limit: 2

debug:
deepspeed:
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"

workspace/tmp/

This model is a fine-tuned version of eryk-mazus/tinyllama-with-custom-tokenizer on the eryk-mazus/polka-pretrain-en-pl-v1 dataset. It achieves the following results on the evaluation set:

  • Loss: 1.8795
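
For reference, the cross-entropy loss above corresponds to a token-level perplexity of roughly 6.5; a one-line sanity check, using only the reported figure, is:

```python
import math

eval_loss = 1.8795          # validation loss reported above
print(math.exp(eval_loss))  # perplexity = exp(loss) ≈ 6.55
```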

Model description

More information needed

Intended uses & limitations

More information needed
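
A minimal inference sketch is shown below. The Hub repository id `eryk-mazus/polka-1.1b` is an assumption based on where this card lives; substitute the actual checkpoint path if it differs. Loading in bfloat16 matches the `bf16: true` training setting.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "eryk-mazus/polka-1.1b"  # assumed repo id; replace with the real checkpoint path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Plain text completion, matching the `completion` dataset format used for training.
prompt = "Warszawa to"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```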

Training and evaluation data

More information needed
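
Per the Axolotl config above, training used the eryk-mazus/polka-pretrain-en-pl-v1 dataset in plain `completion` format on the `text` field, with 5% held out for validation. Assuming the dataset is publicly available on the Hub, a quick way to peek at it is:

```python
from datasets import load_dataset

# Stream the corpus referenced in the config rather than downloading it in full.
ds = load_dataset("eryk-mazus/polka-pretrain-en-pl-v1", split="train", streaming=True)

example = next(iter(ds))
print(example["text"][:500])  # `text` is the completion field named in the config
```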

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • total_eval_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
  • lr_scheduler_type: cosine
  • num_epochs: 1
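
The effective batch sizes listed above follow directly from the per-device settings; a quick back-of-the-envelope check:

```python
# Not part of the training script: just the arithmetic behind the totals above.
micro_batch_size = 4                # per-device train/eval batch size
gradient_accumulation_steps = 2
num_devices = 8

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = micro_batch_size * num_devices   # no accumulation during eval

print(total_train_batch_size)  # 64
print(total_eval_batch_size)   # 32
```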

Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.0469        | 0.01  | 1000  | 3.0497          |
| 2.664         | 0.02  | 2000  | 2.6586          |
| 2.5018        | 0.04  | 3000  | 2.4944          |
| 2.5955        | 0.05  | 4000  | 2.3988          |
| 2.2783        | 0.06  | 5000  | 2.3338          |
| 2.3171        | 0.07  | 6000  | 2.2852          |
| 2.189         | 0.08  | 7000  | 2.2459          |
| 2.3594        | 0.09  | 8000  | 2.2153          |
| 2.1882        | 0.11  | 9000  | 2.1882          |
| 2.2699        | 0.12  | 10000 | 2.1659          |
| 2.1273        | 0.13  | 11000 | 2.1469          |
| 2.1041        | 0.14  | 12000 | 2.1291          |
| 2.1698        | 0.15  | 13000 | 2.1138          |
| 2.2126        | 0.16  | 14000 | 2.1004          |
| 2.1065        | 0.18  | 15000 | 2.0886          |
| 2.0589        | 0.19  | 16000 | 2.0764          |
| 2.0537        | 0.2   | 17000 | 2.0663          |
| 1.9746        | 0.21  | 18000 | 2.0569          |
| 2.2128        | 0.22  | 19000 | 2.0477          |
| 2.1342        | 0.23  | 20000 | 2.0393          |
| 2.0643        | 0.25  | 21000 | 2.0312          |
| 2.2776        | 0.26  | 22000 | 2.0240          |
| 1.94          | 0.27  | 23000 | 2.0173          |
| 1.8249        | 0.28  | 24000 | 2.0111          |
| 1.966         | 0.29  | 25000 | 2.0049          |
| 1.9351        | 0.31  | 26000 | 1.9994          |
| 1.9563        | 0.32  | 27000 | 1.9947          |
| 1.9496        | 0.33  | 28000 | 1.9878          |
| 2.0127        | 0.34  | 29000 | 1.9835          |
| 2.0043        | 0.35  | 30000 | 1.9794          |
| 2.0227        | 0.36  | 31000 | 1.9748          |
| 1.9308        | 0.38  | 32000 | 1.9704          |
| 1.9183        | 0.39  | 33000 | 1.9655          |
| 1.9919        | 0.4   | 34000 | 1.9620          |
| 1.9351        | 0.41  | 35000 | 1.9580          |
| 1.9103        | 0.42  | 36000 | 1.9537          |
| 1.7521        | 0.43  | 37000 | 1.9512          |
| 1.9567        | 0.45  | 38000 | 1.9454          |
| 2.022         | 0.46  | 39000 | 1.9426          |
| 1.8526        | 0.47  | 40000 | 1.9398          |
| 1.8912        | 0.48  | 41000 | 1.9370          |
| 2.0546        | 0.49  | 42000 | 1.9334          |
| 2.0607        | 0.5   | 43000 | 1.9308          |
| 2.0078        | 0.52  | 44000 | 1.9279          |
| 1.889         | 0.53  | 45000 | 1.9253          |
| 1.8587        | 0.54  | 46000 | 1.9222          |
| 1.8571        | 0.55  | 47000 | 1.9199          |
| 1.8806        | 0.56  | 48000 | 1.9178          |
| 1.8483        | 0.58  | 49000 | 1.9150          |
| 1.7862        | 0.59  | 50000 | 1.9130          |
| 1.8989        | 0.6   | 51000 | 1.9102          |
| 1.9389        | 0.61  | 52000 | 1.9083          |
| 1.9301        | 0.62  | 53000 | 1.9065          |
| 1.9522        | 0.63  | 54000 | 1.9046          |
| 1.883         | 0.65  | 55000 | 1.9027          |
| 1.9647        | 0.66  | 56000 | 1.9002          |
| 1.9284        | 0.67  | 57000 | 1.8988          |
| 1.8836        | 0.68  | 58000 | 1.8974          |
| 1.8472        | 0.69  | 59000 | 1.8956          |
| 2.1232        | 0.7   | 60000 | 1.8945          |
| 1.8571        | 0.72  | 61000 | 1.8933          |
| 1.8043        | 0.73  | 62000 | 1.8918          |
| 1.9468        | 0.74  | 63000 | 1.8906          |
| 1.9173        | 0.75  | 64000 | 1.8896          |
| 1.7762        | 0.76  | 65000 | 1.8880          |
| 2.032         | 0.77  | 66000 | 1.8876          |
| 1.9362        | 0.79  | 67000 | 1.8867          |
| 1.8308        | 0.8   | 68000 | 1.8854          |
| 1.9289        | 0.81  | 69000 | 1.8847          |
| 1.9467        | 0.82  | 70000 | 1.8841          |
| 1.8798        | 0.83  | 71000 | 1.8835          |
| 1.8868        | 0.84  | 72000 | 1.8828          |
| 1.8905        | 0.86  | 73000 | 1.8820          |
| 1.9508        | 0.87  | 74000 | 1.8816          |
| 1.7983        | 0.88  | 75000 | 1.8813          |
| 1.7693        | 0.89  | 76000 | 1.8806          |
| 1.7371        | 0.9   | 77000 | 1.8804          |
| 1.8705        | 0.92  | 78000 | 1.8802          |
| 1.8707        | 0.93  | 79000 | 1.8799          |
| 1.9113        | 0.94  | 80000 | 1.8799          |
| 2.1314        | 0.95  | 81000 | 1.8797          |
| 1.9132        | 0.96  | 82000 | 1.8795          |
| 2.0349        | 0.97  | 83000 | 1.8796          |
| 1.7939        | 0.99  | 84000 | 1.8795          |
| 1.8357        | 1.0   | 85000 | 1.8795          |

Framework versions

  • Transformers 4.36.2
  • Pytorch 2.1.2+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.0
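
To check that a local environment matches the versions above, a small verification snippet could be:

```python
import datasets
import tokenizers
import torch
import transformers

# Expected: Transformers 4.36.2, PyTorch 2.1.2+cu121, Datasets 2.16.1, Tokenizers 0.15.0
for lib in (transformers, torch, datasets, tokenizers):
    print(lib.__name__, lib.__version__)
```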