---
language:
- en
license: other
library_name: peft
datasets:
- LDJnr/Puffin
- pvduy/rm_hh_helpful_only
pipeline_tag: text-generation
widget:
- text: 'USER: What''s better, farming, or using computers (which suck)

    ASSISTANT:'
base_model: teknium/Puffin-Phi-v2
---
<table>
<tr>
<td style="width: 30%; text-align: left; vertical-align: middle">

# CurtGPT

Using Microsoft's Phi-1.5 model in ways it was never intended.

</td>
<td style="text-align: center;">
<img src="https://github.com/tim-a-davis/silly_little_language_modeling_thing_at_utd/blob/main/curtgpt%20logo.png?raw=true" width="300" height="auto">
</td>
</tr>
</table>

# Main Procedure

This model is a LoRA adapter on [Puffin-Phi-v2](https://huggingface.co/teknium/Puffin-Phi-v2), trained with [QLoRA](https://arxiv.org/pdf/2305.14314.pdf) and [DPO](https://arxiv.org/pdf/2305.18290.pdf) on 60,000 samples from the [Anthropic helpful-only](https://huggingface.co/datasets/pvduy/rm_hh_helpful_only) dataset.
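
Below is a minimal, hedged loading-and-generation sketch, not the original training or inference code. The adapter id is a placeholder for this repository, and `trust_remote_code=True` is assumed because Phi-1.5-era checkpoints ship custom modeling code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "teknium/Puffin-Phi-v2"
adapter_id = "your-namespace/CurtGPT"  # placeholder: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # assumption: the base checkpoint uses custom modeling code
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the DPO-trained LoRA adapter

# Prompt format follows the widget example above: "USER: ... ASSISTANT:"
prompt = "USER: What's better, farming, or using computers (which suck)\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```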
## Training procedure

The following `bitsandbytes` quantization config was used during training; a sketch of the equivalent `BitsAndBytesConfig` follows the list:

- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
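
A hedged reconstruction of the settings above as a `BitsAndBytesConfig`, not the original training script. The `llm_int8_*` values are library defaults that only apply to 8-bit loading, so they are omitted here:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with fp16 compute, matching the settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Assumption: the quantized base model would be loaded like this before
# attaching LoRA adapters for QLoRA/DPO training.
model = AutoModelForCausalLM.from_pretrained(
    "teknium/Puffin-Phi-v2",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```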

### Framework versions

- PEFT 0.5.0