
Self-Exploring Language Models: Active Preference Elicitation for Online Alignment.

SELM-Phi-3-mini-4k-instruct-iter-3

This model is a fine-tuned version of ZhangShenao/SELM-Phi-3-mini-4k-instruct-iter-2, trained on synthetic data based on the HuggingFaceH4/ultrafeedback_binarized dataset.
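
The underlying preference data can be loaded with the 🤗 Datasets library. The snippet below is a minimal sketch; the `train_prefs` split name is an assumption, and this card does not state which split seeded the synthetic generation.

```python
from datasets import load_dataset

# Binarized UltraFeedback preference pairs (chosen vs. rejected responses).
# NOTE: "train_prefs" is an assumed split name; this card does not state
# which split was used to seed the synthetic data.
ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")
example = ds[0]
print(example["chosen"])    # preferred conversation (list of messages)
print(example["rejected"])  # dispreferred conversation
```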

Model description

  • Model type: A 3.8B-parameter Self-Exploring Language Model (SELM) based on Phi-3-mini-4k-instruct.
  • License: MIT
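
A minimal usage sketch with 🤗 Transformers is shown below. The sampling settings are illustrative, not recommended values from this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZhangShenao/SELM-Phi-3-mini-4k-instruct-iter-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
    trust_remote_code=True,  # may be needed on transformers versions without native Phi-3 support
)

messages = [{"role": "user", "content": "Explain preference optimization in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```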

Results

Model                                AlpacaEval 2.0 (LC WR)   MT-Bench (Average)
SELM-Phi-3-mini-4k-instruct-iter-3   27.98                    8.32
SELM-Phi-3-mini-4k-instruct-iter-2   26.79                    8.44
SELM-Phi-3-mini-4k-instruct-iter-1   27.33                    8.37
Phi-3-mini-4k-instruct               23.05                    8.12

Our model also ranks highly on WildBench! πŸ”₯

Training hyperparameters

The following hyperparameters were used during training:

  • alpha: 0.001
  • beta: 0.01
  • train_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128 (= 4 per-device batch × 8 devices × 4 gradient accumulation steps)
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • num_epochs: 1
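
For intuition about how `alpha` and `beta` enter training: SELM optimizes a DPO-style objective plus an optimism (self-exploration) bonus weighted by `alpha`. The function below is a simplified paraphrase under that reading, not the actual training code; the SELM paper defines the exact objective.

```python
import torch.nn.functional as F

def selm_loss(policy_chosen_logps, policy_rejected_logps,
              ref_chosen_logps, ref_rejected_logps,
              beta=0.01, alpha=0.001):
    """Sketch of a SELM-style loss: DPO term plus an optimism bonus.

    Each *_logps argument is the summed log-probability of the chosen or
    rejected response under the policy or the frozen reference model.
    This is an illustrative sketch, not the code used to train this
    checkpoint.
    """
    # Standard DPO term: implicit reward margin between chosen and rejected.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    dpo_loss = -F.logsigmoid(chosen_rewards - rejected_rewards)

    # Optimism bonus: raises the likelihood of currently preferred
    # responses to drive active exploration (scaled by alpha above).
    optimism_bonus = -alpha * policy_chosen_logps

    return (dpo_loss + optimism_bonus).mean()
```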

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.1.2+cu121
  • Datasets 2.14.6
  • Tokenizers 0.19.1