---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
tags:
  - alignment-handbook
  - trl
  - sft
  - generated_from_trainer
datasets:
  - ReBatch/ultrachat_400k_nl
  - BramVanroy/stackoverflow-chat-dutch
  - vandeju/no_robots_dutch
model-index:
  - name: Reynaerde-7B-Instruct
    results: []
---

# Reynaerde-7B-v3

This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.3 on the ReBatch/ultrachat_400k_nl, BramVanroy/stackoverflow-chat-dutch and vandeju/no_robots_dutch datasets.

## Model description

This model is a Dutch chat model, built on Mistral 7B Instruct v0.3 and further fine-tuned with SFT on the Dutch datasets listed above.

## Intended uses & limitations

The model can generate wrong, misleading, and potentially even offensive content. Use at your own risk. Use it with Mistral's chat template.
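
A minimal usage sketch with `transformers` is shown below; the tokenizer applies Mistral's chat template automatically via `apply_chat_template`. The repo id is assumed from the model-index name above and may differ from the actual Hub path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vandeju/Reynaerde-7B-Instruct"  # assumed repo id, adjust if needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# The chat template wraps the conversation in Mistral's [INST] ... [/INST] format.
messages = [{"role": "user", "content": "Schrijf een korte inleiding over Reynaert de Vos."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```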

## Training and evaluation data

The model achieves the following results on the evaluation set:

- Loss: 0.8596

## Training procedure

This model was trained with QLoRA in bfloat16 with Flash Attention 2 on a single A100 PCIe GPU, using the SFT script from the Alignment Handbook on RunPod.
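
The sketch below illustrates how such a QLoRA setup with Flash Attention 2 is typically configured with `transformers`, `peft`, and `bitsandbytes`. It is an approximation, not the exact Alignment Handbook script; the LoRA rank, alpha, and target modules are illustrative assumptions, as the card does not state them.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization with bfloat16 compute (the "QLoRA in bfloat16" part).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3",
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # requires the flash-attn package
    torch_dtype=torch.bfloat16,
)

# Illustrative LoRA adapter settings; the actual values are not stated in this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```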

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 6
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
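
These settings map roughly onto `transformers.TrainingArguments` as sketched below (total_train_batch_size = 3 per device × 2 accumulation steps = 6). This is an approximation of, not the exact, Alignment Handbook invocation, and the output directory is hypothetical.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="reynaerde-7b-sft",    # hypothetical output path
    learning_rate=2e-4,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=6,
    gradient_accumulation_steps=2,    # 3 x 2 = total train batch size of 6
    seed=42,
    optim="adamw_torch",              # Adam with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    bf16=True,
)
```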

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.2.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1

## Model Developer

The Mistral-7B-Instruct-v0.3 model, on which this model is based, was created by Mistral AI. The fine-tuning was done by Julien Van den Avenne.