---
license: apache-2.0
library_name: peft
tags:
  - trl
  - sft
  - generated_from_trainer
datasets:
  - generator
base_model: mistralai/Mixtral-8x7B-v0.1
model-index:
  - name: Mixtral_8x7b_WuKurtz
    results: []
---

# Mixtral_8x7b_WuKurtz

This model is a parameter-efficient fine-tune of mistralai/Mixtral-8x7B-v0.1, trained on an 80k-example nephrology dataset that we curated (listed in the metadata as the generator dataset). A loading sketch is shown below.
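
The adapter can be loaded on top of the base model with transformers and peft. The sketch below is a minimal inference example; the adapter repo id SeanWu25/Mixtral_8x7b_WuKurtz and the example prompt are assumptions, not confirmed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mixtral-8x7B-v0.1"
adapter_id = "SeanWu25/Mixtral_8x7b_WuKurtz"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the PEFT adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "What are the main causes of hyperkalemia?"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```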

## Model description

Mixtral 8x7b WuKurtz was created by Sean Wu, Michael Koo, Andy Black, Lesley Blum, Fabien Scalzo, and Ira Kurtz at Pepperdine and UCLA. arXiv paper out soon!

## Intended uses & limitations

More information needed

## Training and evaluation data

Training data out soon!

## Training procedure

Parameter-efficient fine-tuning (PEFT); a configuration sketch follows.
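
A minimal sketch of what a PEFT setup for this base model could look like, assuming a LoRA adapter; the rank, alpha, and target modules below are illustrative assumptions, not the values used to train this model.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model (full precision shown for brevity; quantized loading
# is common for a model this size).
model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-v0.1")

# Illustrative LoRA configuration -- r, lora_alpha, and target_modules are
# assumptions, not the values used for Mixtral_8x7b_WuKurtz.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```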

### Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- num_epochs: 1
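
For reference, here is a hedged sketch of the corresponding transformers TrainingArguments. Since 0.03 is fractional, warmup is expressed as warmup_ratio rather than warmup_steps (an assumption about the original config), and the output directory is illustrative.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Mixtral_8x7b_WuKurtz",  # illustrative output path
    learning_rate=2.5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",        # Adam with betas=(0.9,0.999), epsilon=1e-08 (defaults)
    lr_scheduler_type="linear",
    warmup_ratio=0.03,          # listed above as lr_scheduler_warmup_steps: 0.03
    num_train_epochs=1,
)
```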

### Training results

### Framework versions

- PEFT 0.8.1
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1