---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- contaminated
datasets:
- kaitchup/hellaswag_winograndexl_ai2_arc_correctAnswerOnly_flattened
---
|
|
|
|
|
|
|
## Model Details
|
|
|
Mistral 7B QLoRA adapter fine-tuned for 1 epoch on kaitchup/hellaswag_winograndexl_ai2_arc_correctAnswerOnly_flattened.
|
|
|
For details on how this model was created, see:

[Contaminated LLMs: What Happens When You Train an LLM on the Evaluation Benchmarks?](https://thesalt.substack.com/p/contaminated-llms-what-happens-when)
|
|
|
|
|
### Model Description
|
|
|
|
|
|
|
|
|
|
- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **Model type:** Causal language model
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
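As a sketch, a QLoRA adapter like this one can be attached to its base model with `transformers` and `peft`. The base model id (`mistralai/Mistral-7B-v0.1`) and the adapter repository id below are assumptions not stated in this card; substitute the actual ids.

```python
# Sketch: load the base model in 4-bit (as in QLoRA) and attach the adapter.
# Both repository ids below are assumptions, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"          # assumed base model
adapter_id = "kaitchup/your-adapter-repo"      # hypothetical adapter repo id

# NF4 4-bit quantization config, typical for QLoRA inference/fine-tuning
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach LoRA weights

prompt = "Question: Which is heavier, a kilogram of lead or of feathers?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Loading in 4-bit keeps memory low enough for a single consumer GPU; the adapter can also be merged into the base weights with `model.merge_and_unload()` for deployment without `peft`.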