---
datasets:
- Mihaiii/OpenHermes-2.5-1k-longest-curated
base_model: migtissera/Tess-10.7B-v1.5b
inference: false
license: apache-2.0
metrics:
- accuracy
---
[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)

The Bucharest series is mostly an experiment. Use the Pallas series instead.

This is an instruct-based fine-tune of [migtissera/Tess-10.7B-v1.5b](https://huggingface.co/migtissera/Tess-10.7B-v1.5b). It was trained on a private dataset plus [Mihaiii/OpenHermes-2.5-1k-longest-curated](https://huggingface.co/datasets/Mihaiii/OpenHermes-2.5-1k-longest-curated), a curated subset of [HuggingFaceH4/OpenHermes-2.5-1k-longest](https://huggingface.co/datasets/HuggingFaceH4/OpenHermes-2.5-1k-longest), which is itself a subset of [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5).
# Prompt Format:
```
SYSTEM:
USER:
ASSISTANT:
```
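For illustration, here is a minimal sketch of assembling a prompt in this format and generating with the `transformers` library. The repo id `Mihaiii/Bucharest-0.2` and the example messages are assumptions, not taken from this card; adjust them to your setup.
```python
# Minimal sketch (assumed repo id): build a SYSTEM/USER/ASSISTANT prompt
# in the format above and generate a completion with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mihaiii/Bucharest-0.2"  # assumption; replace with the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "SYSTEM: You are a helpful assistant.\n"
    "USER: Explain what a GGUF file is in one sentence.\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```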
GGUF: [tsunemoto/Bucharest-0.2-GGUF](https://huggingface.co/tsunemoto/Bucharest-0.2-GGUF)
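
If you use the GGUF quants, a minimal sketch with `llama-cpp-python` might look like the following; the quantization filename is an assumption, so substitute whichever file you download from the repo above.
```python
# Minimal sketch (assumed filename): run a downloaded GGUF quant with
# llama-cpp-python, using the same SYSTEM/USER/ASSISTANT prompt format.
from llama_cpp import Llama

llm = Llama(model_path="bucharest-0.2.Q4_K_M.gguf", n_ctx=4096)  # filename is an assumption

prompt = (
    "SYSTEM: You are a helpful assistant.\n"
    "USER: What is the capital of Romania?\n"
    "ASSISTANT:"
)

out = llm(prompt, max_tokens=128, stop=["USER:"])
print(out["choices"][0]["text"].strip())
```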