---
language:
- en
license: other
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
pipeline_tag: text-generation
model-index:
- name: Llama3-stanford-encyclopedia-philosophy-QA
results: []
---
# Llama3-stanford-encyclopedia-philosophy-QA
This model is a QLoRA fine-tune of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the [Stanford Encyclopedia of Philosophy-instruct](https://huggingface.co/datasets/ruggsea/stanford-encyclopedia-of-philosophy_instruct) dataset. It is intended for answering philosophical questions in a more formal tone.
## Model description
The model was trained with the following system prompt:
```
"You are an expert and informative yet accessible Philosophy university professor. Students will pose you philosophical questions, answer them in a correct and rigorous but not to obscure way."
```
Furthermore, the conversations in the dataset were formatted with the Llama 3 chat template:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
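For inference, the tokenizer's built-in chat template should reproduce the format above. Below is a minimal sketch with `transformers` (the user question is an arbitrary example, not from the dataset):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ruggsea/Llama3-stanford-encyclopedia-philosophy-QA"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    # The system prompt used during training, quoted verbatim
    {"role": "system", "content": (
        "You are an expert and informative yet accessible Philosophy university professor. "
        "Students will pose you philosophical questions, answer them in a correct and "
        "rigorous but not to obscure way."
    )},
    {"role": "user", "content": "What is the trolley problem?"},
]

# apply_chat_template renders the Llama 3 format shown above and, with
# add_generation_prompt=True, appends the assistant header for generation
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|eot_id|>"),
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```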
### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
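
The exact training script is not included in this card; the following is a hedged reproduction sketch with TRL's `SFTTrainer` and a 4-bit QLoRA setup. Only the hyperparameter values above come from the card; the LoRA rank/alpha, quantization settings, and the `text` column name are assumptions:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# 4-bit quantization for QLoRA (assumed NF4 config, not stated in the card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# LoRA rank, alpha, and dropout are illustrative assumptions
peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32, lora_dropout=0.05)

args = TrainingArguments(
    output_dir="Llama3-stanford-encyclopedia-philosophy-QA",
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 8 x 4 = total train batch size of 32
    num_train_epochs=3,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    seed=42,
)

dataset = load_dataset(
    "ruggsea/stanford-encyclopedia-of-philosophy_instruct", split="train"
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
    tokenizer=tokenizer,
    # Assumes a pre-formatted "text" column; otherwise pass a formatting_func
    dataset_text_field="text",
)
trainer.train()
```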
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1