---
base_model: VAGOsolutions/SauerkrautLM-7b-HerO
inference: false
language:
- en
- de
library_name: transformers
license: apache-2.0
model_creator: VAGO solutions
model_name: SauerkrautLM 7B HerO
model_type: mistral
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- mistral
- finetune
- chatml
- augmentation
- german
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# SauerkrautLM 7B HerO - AWQ
- Model creator: [VAGO solutions](https://huggingface.co/VAGOsolutions)
- Original model: [SauerkrautLM 7B HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO)
<!-- description start -->
## Description
This repo contains AWQ model files for [VAGO solutions's SauerkrautLM 7B HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to, or better than, the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SauerkrautLM-7B-HerO-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SauerkrautLM-7B-HerO-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SauerkrautLM-7B-HerO-GGUF)
* [VAGO solutions's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO)
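If you prefer to fetch a repo programmatically rather than through a UI, here is a minimal sketch using `huggingface_hub`'s `snapshot_download` (the `local_dir` value is just an example path):
```python
from huggingface_hub import snapshot_download

# Download all files from the main branch of the AWQ repo
snapshot_download(
    repo_id="TheBloke/SauerkrautLM-7B-HerO-AWQ",
    revision="main",
    local_dir="SauerkrautLM-7B-HerO-AWQ",  # example destination directory
)
```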
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
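If you build prompts in Python, the tokenizer shipped with this repo should render the same format via its chat template. A minimal sketch, assuming the repo's `tokenizer_config.json` defines the ChatML template (standard for OpenHermes-based models); the system message is just an example:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheBloke/SauerkrautLM-7B-HerO-AWQ")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # example system message
    {"role": "user", "content": "Tell me about AI"},
]

# Should render the ChatML prompt shown above, ending with the assistant header
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```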
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models and GEMV kernel models is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/SauerkrautLM-7B-HerO-AWQ/tree/main) | 4 | 128 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 4096 | 4.15 GB |
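For reference, the Bits, GS and kernel-version values above correspond to AutoAWQ's `quant_config`. A rough sketch of how such a quantisation is produced, under the assumption that AutoAWQ's documented `quantize()` API is used; this is illustrative, not the exact script used for this repo (which calibrated on German Quad rather than AutoAWQ's default dataset):
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "VAGOsolutions/SauerkrautLM-7b-HerO"
quant_path = "SauerkrautLM-7B-HerO-AWQ"  # example output directory

# 4-bit, group size 128, GEMM kernel - matching the table above
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# quantize() runs calibration; AutoAWQ uses a built-in calibration dataset by default
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```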
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click installers unless you're sure you know how to do a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/SauerkrautLM-7B-HerO-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `SauerkrautLM-7B-HerO-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click **Load**, and the model will load and be ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/SauerkrautLM-7B-HerO-AWQ --quantization awq --dtype auto
```
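Once the server is up, you can query it over HTTP. A minimal sketch using `requests`, assuming the demo `api_server`'s default port 8000 and its `/generate` endpoint:
```python
import requests

# ChatML-formatted prompt, as shown in the prompt template section
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"  # example system message
    "<|im_start|>user\n"
    "Tell me about AI<|im_end|>\n"
    "<|im_start|>assistant\n"
)

response = requests.post(
    "http://localhost:8000/generate",
    json={"prompt": prompt, "max_tokens": 128, "temperature": 0.7},
)
print(response.json())
```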
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]

# Example system message; replace with your own
system_message = "You are a helpful assistant."

prompt_template = '''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/SauerkrautLM-7B-HerO-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/SauerkrautLM-7B-HerO-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
# Example system message; replace with your own
system_message = "You are a helpful assistant."
prompt_template = f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(
    prompt_template,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1,
)

print(f"Model output: {response}")
```
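Continuing from the example above, `InferenceClient.text_generation` also supports streaming. A minimal sketch with `stream=True`, which yields the response incrementally:
```python
# Streaming variant: with stream=True the client yields generated text piece by piece
for token in client.text_generation(
    prompt_template,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    stream=True,
):
    print(token, end="", flush=True)
print()
```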
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/SauerkrautLM-7B-HerO-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
# Example system message; replace with your own
system_message = "You are a helpful assistant."
prompt_template = f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.6 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: VAGO solutions's SauerkrautLM 7B HerO
![SauerkrautLM](https://vago-solutions.de/wp-content/uploads/2023/11/hero.png "SauerkrautLM-7b-HerO")
## VAGO solutions SauerkrautLM-7b-HerO
Introducing **SauerkrautLM-7b-HerO** – the pinnacle of German language model technology!
Crafted by **merging** **[Teknium's OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)** and **[Open-Orca's Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)**, then **uniquely fine-tuned with the Sauerkraut dataset.**
SauerkrautLM-7b-HerO represents a breakthrough in language modeling, achieving an optimal balance between extensive German data and essential international sources.
This ensures the model not only excels in understanding the nuances of the German language but also retains its global capabilities.
Harnessing the innovative power of the **gradient SLERP method from MergeKit**, we've achieved a groundbreaking fusion of two of the best-performing 7B models based on the Mistral architecture.
This merge has allowed us to combine the best features of both models, creating an unparalleled synergy.
Coupled with the German Sauerkraut dataset, which consists of a mix of augmented and translated data, we have successfully taught the English-speaking merged model the intricacies of the German language.
This was achieved *without the loss of core competencies that typically accompanies fine-tuning a model, previously trained mainly in English, in another language.*
Our approach ensures that the model retains its original strengths while acquiring a profound understanding of German, **setting a new benchmark in bilingual language model proficiency.**
# Table of Contents
1. [Overview of all HerO models](#all-hero-models)
2. [Model Details](#model-details)
- [Prompt template](#prompt-template)
- [Training Dataset](#training-dataset)
- [Merge Procedure](#merge-procedure)
3. [Evaluation](#evaluation)
- [GPT4ALL](#gpt4all)
- [Language Model Evaluation Harness](#language-model-evaluation-harness)
- [BigBench](#big-bench)
- [MMLU](#mmlu)
- [TruthfulQA](#truthfulqa)
- [MT-Bench (German)](#mt-bench-german)
- [MT-Bench (English)](#mt-bench-english)
- [Additional German Benchmark results](#additional-german-benchmark-results)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## All HerO Models
| Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-7b-HerO | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) | coming soon | coming soon | coming soon |
## Model Details
**SauerkrautLM-7b-HerO**
- **Model Type:** SauerkrautLM-7b-HerO is an auto-regressive language model based on the transformer architecture
- **Language(s):** English, German
- **License:** Apache 2.0
- **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected])
### Training Dataset:
SauerkrautLM-7b-HerO was trained with a mix of German data augmentation and translated data.
We found that simply translating training data can lead to unnatural German phrasings.
Data augmentation techniques were therefore used to ensure grammatical and syntactic correctness and more natural German wording in our training data.
### Merge Procedure:
SauerkrautLM-7b-HerO was merged on one A100 GPU with [mergekit](https://github.com/cg123/mergekit).
The merged model contains [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca).
We applied the gradient SLERP method.
### Prompt Template:
```
<|im_start|>system
Du bist Sauerkraut-HerO, ein großes Sprachmodell, das höflich und kompetent antwortet. Schreibe deine Gedanken Schritt für Schritt auf, um Probleme sinnvoll zu lösen.<|im_end|>
<|im_start|>user
Wie geht es dir?<|im_end|>
<|im_start|>assistant
Mir geht es gut!<|im_end|>
<|im_start|>user
Bitte erkläre mir, wie die Zusammenführung von Modellen durch bestehende Spitzenmodelle profitieren kann.<|im_end|>
<|im_start|>assistant
```
## Evaluation
### GPT4ALL:
*Compared to relevant German Closed and Open Source models*
![GPT4ALL diagram](https://vago-solutions.de/wp-content/uploads/2023/11/GPT4All.png "SauerkrautLM-7b-HerO GPT4ALL Diagram")
![GPT4ALL table](https://vago-solutions.de/wp-content/uploads/2023/11/GPT4All-Tabelle.png "SauerkrautLM-7b-HerO GPT4ALL Table")
### Language Model Evaluation Harness:
*Compared to Aleph Alpha Luminous Models*
![Harness](https://vago-solutions.de/wp-content/uploads/2023/11/Luminous-comparison.png "SauerkrautLM-7b-HerO Harness")
*performed with the newest Language Model Evaluation Harness*
### Big Bench:
![BBH](https://vago-solutions.de/wp-content/uploads/2023/11/BigBench.png "SauerkrautLM-7b-HerO BBH")
*performed with the newest Language Model Evaluation Harness*
### MMLU:
*Compared to Big Boy LLMs (Grok0, Grok1, GPT3.5, GPT4)*
![MMLU](https://vago-solutions.de/wp-content/uploads/2023/11/MMLU-Benchmark.png "SauerkrautLM-7b-HerO MMLU")
### TruthfulQA:
*Compared to OpenAI Models (GPT3.5, GPT4)*
![TruthfulQA](https://vago-solutions.de/wp-content/uploads/2023/11/Truthfulqa-Benchmark.png "SauerkrautLM-7b-HerO TruthfulQA")
### MT-Bench (German):
![MT-Bench German Diagram](https://vago-solutions.de/wp-content/uploads/2023/11/MT-Bench-German.png "SauerkrautLM-7b-HerO MT-Bench German Diagram")
```
########## First turn ##########
                                      score
model                           turn
SauerkrautLM-70b-v1             1     7.25000
SauerkrautLM-7b-HerO <---       1     6.96875
SauerkrautLM-7b-v1-mistral      1     6.30625
leo-hessianai-13b-chat          1     6.18750
SauerkrautLM-13b-v1             1     6.16250
leo-mistral-hessianai-7b-chat   1     6.15625
Llama-2-70b-chat-hf             1     6.03750
vicuna-13b-v1.5                 1     5.80000
SauerkrautLM-7b-v1              1     5.65000
leo-hessianai-7b-chat           1     5.52500
vicuna-7b-v1.5                  1     5.42500
Mistral-7B-v0.1                 1     5.37500
SauerkrautLM-3b-v1              1     3.17500
open_llama_3b_v2                1     1.68750
Llama-2-7b                      1     1.28750
########## Second turn ##########
                                      score
model                           turn
SauerkrautLM-70b-v1             2     6.83125
SauerkrautLM-7b-HerO <---       2     6.30625
vicuna-13b-v1.5                 2     5.63125
SauerkrautLM-13b-v1             2     5.34375
SauerkrautLM-7b-v1-mistral      2     5.26250
leo-mistral-hessianai-7b-chat   2     4.99375
SauerkrautLM-7b-v1              2     4.73750
leo-hessianai-13b-chat          2     4.71250
vicuna-7b-v1.5                  2     4.67500
Llama-2-70b-chat-hf             2     4.66250
Mistral-7B-v0.1                 2     4.53750
leo-hessianai-7b-chat           2     2.65000
SauerkrautLM-3b-v1              2     1.98750
open_llama_3b_v2                2     1.22500
Llama-2-7b                      2     1.07500
########## Average ##########
                                   score
model
SauerkrautLM-70b-v1             7.040625
SauerkrautLM-7b-HerO <---       6.637500
SauerkrautLM-7b-v1-mistral      5.784375
SauerkrautLM-13b-v1             5.753125
vicuna-13b-v1.5                 5.715625
leo-mistral-hessianai-7b-chat   5.575000
leo-hessianai-13b-chat          5.450000
Llama-2-70b-chat-hf             5.350000
SauerkrautLM-7b-v1              5.193750
vicuna-7b-v1.5                  5.050000
Mistral-7B-v0.1                 4.956250
leo-hessianai-7b-chat           4.087500
SauerkrautLM-3b-v1              2.581250
open_llama_3b_v2                1.456250
Llama-2-7b                      1.181250
```
*performed with the newest FastChat version*
### MT-Bench (English):
![MT-Bench English Diagram](https://vago-solutions.de/wp-content/uploads/2023/11/MT-Bench-English.png "SauerkrautLM-7b-HerO MT-Bench English Diagram")
```
########## First turn ##########
                                  score
model                       turn
OpenHermes-2.5-Mistral-7B   1     8.21875
SauerkrautLM-7b-HerO <---   1     8.03125
Mistral-7B-OpenOrca         1     7.65625
neural-chat-7b-v3-1         1     7.22500
########## Second turn ##########
                                  score
model                       turn
OpenHermes-2.5-Mistral-7B   2     7.1000
SauerkrautLM-7b-HerO <---   2     6.7875
neural-chat-7b-v3-1         2     6.4000
Mistral-7B-OpenOrca         2     6.1750
########## Average ##########
                               score
model
OpenHermes-2.5-Mistral-7B   7.659375
SauerkrautLM-7b-HerO <---   7.409375
Mistral-7B-OpenOrca         6.915625
neural-chat-7b-v3-1         6.812500
```
*performed with the newest FastChat version*
### Additional German Benchmark results:
![GermanBenchmarks](https://vago-solutions.de/wp-content/uploads/2023/11/German-benchmarks.png "SauerkrautLM-7b-HerO German Benchmarks")
*performed with the newest Language Model Evaluation Harness*
## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out, and we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not responsible for the actions of third parties who utilize our models. These models may be employed for commercial purposes, and the Apache 2.0 license remains applicable and is included with the model files.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.
## Acknowledgement
Many thanks to [OpenOrca](https://huggingface.co/Open-Orca) and [teknium](https://huggingface.co/teknium) for providing such valuable models to the Open-Source community.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)