OpenHermes-2.5-Code-290k-13B
OpenHermes-2.5-Code-290k-13B is a state-of-the-art Llama-2 fine-tune trained on an additional code dataset. Based on the evaluation results below, it outperforms teknium's original model. It was trained on my existing dataset, OpenHermes-2.5-Code-290k, which is an amalgamation of two datasets: OpenHermes-2.5, a very high-quality dataset made available by teknium, and my own Code-290k-ShareGPT. The combined dataset is in Vicuna/ShareGPT format and contains around 1.29 million conversations. I have cleaned the dataset provided by teknium and removed metadata such as "source" & "category". The dataset consists primarily of synthetically generated instruction and chat samples.
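For reference, here is a minimal sketch of what a single conversation record in the ShareGPT/Vicuna format looks like. The field names follow the common ShareGPT convention; this particular record is hypothetical, and the exact schema of the dataset may differ in detail.

```python
# A hypothetical ShareGPT-style conversation record (illustrative only;
# field names follow the common ShareGPT convention, not a published schema).
record = {
    "id": "code-290k-000001",
    "conversations": [
        {
            "from": "human",
            "value": "Write a Python function to check whether a number is prime.",
        },
        {
            "from": "gpt",
            "value": (
                "Here is a simple implementation:\n\n"
                "def is_prime(n):\n"
                "    if n < 2:\n"
                "        return False\n"
                "    for i in range(2, int(n ** 0.5) + 1):\n"
                "        if n % i == 0:\n"
                "            return False\n"
                "    return True"
            ),
        },
    ],
}
```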
This model has enhanced coding capabilities in addition to general abilities such as blogging, story generation, Q&A, and more.
Training:
The entire model was trained on 4 x A100 80GB GPUs; training for 2 epochs took 21 days. The FastChat & DeepSpeed codebases were used for training. The base model is Meta's Llama-2.
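Since the exact training configuration has not been published, the snippet below is an illustrative DeepSpeed setup of the kind typically used for a full 13B fine-tune on 4 x A100 80GB: ZeRO stage 3 with bf16. It is a sketch under those assumptions, not my actual config; the dict can be passed to Hugging Face `TrainingArguments(deepspeed=...)`, which FastChat's trainer builds on.

```python
# Illustrative DeepSpeed ZeRO-3 config for a full 13B fine-tune (an assumed
# example, not the published config). "auto" values are resolved by the
# Hugging Face Trainer integration at launch time.
ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,  # shard optimizer states, gradients, and parameters across GPUs
        "overlap_comm": True,
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
}
```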
This is a fully fine-tuned model (not an adapter). Links to quantized models are listed below and will be updated as they become available.
GPTQ, GGUF, AWQ & Exllama
- GPTQ: TBA
- GGUF: Link
- AWQ: TBA
- Exllama v2: Link
Special thanks to LoneStriker and bartowski for quantizing these models.
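If you use the GGUF quantization, it can be run locally with llama-cpp-python. A minimal sketch, assuming a Q4_K_M file; the actual `.gguf` filename depends on which quantization you download:

```python
# Minimal llama-cpp-python usage sketch; the .gguf filename is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="openhermes-2.5-code-290k-13b.Q4_K_M.gguf",  # hypothetical path
    n_ctx=4096,
)
out = llm(
    "You are a helpful AI assistant.\n\n"
    "USER: Write a one-line Python palindrome check.\nASSISTANT:",
    max_tokens=128,
    stop=["USER:"],
)
print(out["choices"][0]["text"])
```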
Example Prompt:
```
This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation. It can generate Story, Blogs .....

Context
You are a helpful AI assistant.

USER: <prompt>
ASSISTANT:
```
You can modify the above prompt as per your requirements. I have used the ShareGPT/Vicuna v1.1 format.
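Below is a minimal sketch of loading the model with Hugging Face transformers and applying this prompt format. The generation settings are illustrative defaults, not tuned recommendations.

```python
# Minimal transformers usage sketch (illustrative generation settings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/OpenHermes-2.5-Code-290k-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

system = (
    "This is a conversation with your helpful AI assistant. "
    "AI assistant can generate Code in various Programming Languages "
    "along with necessary explanation."
)
user_prompt = "Write a Python function that reverses a linked list."

# ShareGPT/Vicuna v1.1 style: system text, then USER:/ASSISTANT: turns.
prompt = (
    f"{system}\n\nContext\nYou are a helpful AI assistant.\n\n"
    f"USER: {user_prompt}\nASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```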
I want to say a special thanks to the open-source community for helping & guiding me to better understand AI/model development.
Thank you for your love & support.
Example Output
I will update this soon.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 63.33 |
| AI2 Reasoning Challenge (25-Shot) | 57.34 |
| HellaSwag (10-Shot) | 80.48 |
| MMLU (5-Shot) | 56.53 |
| TruthfulQA (0-shot) | 52.50 |
| Winogrande (5-shot) | 74.82 |
| GSM8k (5-shot) | 58.30 |
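If you want to reproduce individual numbers locally, here is a sketch using EleutherAI's lm-evaluation-harness, the framework behind the Open LLM Leaderboard. The leaderboard pins a specific harness version, so local scores may differ slightly from the table above.

```python
# Illustrative local reproduction of the 25-shot ARC score with
# lm-evaluation-harness (pip install lm-eval). Scores may not match
# the leaderboard exactly due to harness version differences.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ajibawa-2023/OpenHermes-2.5-Code-290k-13B,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```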