---
library_name: transformers
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-360M-Instruct
tags:
  - generated_from_trainer
model-index:
  - name: finetune-smolLM2-360M-Instruct
    results: []
---

# finetune-smolLM2-360M-Instruct

This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-360M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 2.7585
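
For intuition, this loss can be converted to a perplexity, assuming it is the usual mean token-level cross-entropy in nats (the card does not state this explicitly, so treat the conversion as a rough sketch):

```python
# Rough intuition only: convert the reported eval loss to perplexity,
# assuming it is the mean token-level cross-entropy in nats.
import math

eval_loss = 2.7585
perplexity = math.exp(eval_loss)
print(f"perplexity ≈ {perplexity:.1f}")  # ≈ 15.8
```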

## Model description

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.

SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, and The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using UltraFeedback.

The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by Argilla such as Synth-APIGen-v0.1.
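
A minimal usage sketch for loading this checkpoint with `transformers` and generating a chat-style reply; the repository id is assumed from this card, and the prompt and sampling settings are illustrative rather than recommended values:

```python
# Minimal sketch: load the fine-tuned checkpoint and run chat-style generation.
# The repo id is assumed from this card; prompt and sampling settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Michaelj1/finetune-smolLM2-360M-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

messages = [{"role": "user", "content": "Summarize what SmolLM2 is in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64, do_sample=True, temperature=0.2, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```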

## Intended uses & limitations

SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
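
As a reproduction aid, here is a sketch of how the hyperparameters above could map onto `transformers` `TrainingArguments`; the output directory and the `fp16` flag (standing in for "Native AMP") are assumptions, and the dataset and `Trainer` wiring are omitted:

```python
# Sketch only: mirrors the hyperparameters listed above.
# output_dir and fp16 (for "Native AMP") are assumptions; dataset and Trainer setup are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetune-smolLM2-360M-Instruct",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # 16 * 4 = 64 effective train batch size
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```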

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.7017        | 0.0871 | 50   | 2.8763          |
| 2.6474        | 0.1743 | 100  | 2.8244          |
| 2.5094        | 0.2614 | 150  | 2.8044          |
| 2.6284        | 0.3486 | 200  | 2.7901          |
| 2.7183        | 0.4357 | 250  | 2.7798          |
| 2.6457        | 0.5229 | 300  | 2.7732          |
| 2.7641        | 0.6100 | 350  | 2.7691          |
| 2.6276        | 0.6972 | 400  | 2.7661          |
| 2.7211        | 0.7843 | 450  | 2.7639          |
| 2.6556        | 0.8715 | 500  | 2.7603          |
| 2.7031        | 0.9586 | 550  | 2.7587          |

### Framework versions

- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0

## Citation

    @misc{allal2024SmolLM2,
          title={SmolLM2 - with great data, comes great performance},
          author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
          year={2024},
    }