---
language:
- en
license: apache-2.0
---
<div align="center">
<b style="font-size: 40px;">Zion_Alpha_Instruction_Tuned_SLERP</b>
</div>
<img src="https://i.imgur.com/e1LEQ18.png" alt="Zion_Alpha_Instruction_Tuned_SLERP" style="width: 50%; min-width: 400px; display: block; margin: auto;">
# Model Details
Zion_Alpha is the first **REAL** Hebrew model in the world. This version was fine-tuned for instruction-following tasks. The finetune was done using SOTA techniques and my insights from years of underwater basket weaving. If you want to offer me a job, just add me on Facebook.
# Another world record broken by Zion_Alpha!
On **June 10th, 2024**, this model achieved the **highest sentiment analysis score in the world** for Hebrew LLMs, with an impressive **70.3**, surpassing even a **35B** model that's five times its size!
<div align="center">
<img src="https://i.imgur.com/yg6CJoz.png" alt="Zion_Alpha SNLI Score" style="width: 80%; min-width: 700px; display: block; margin: auto;">
</div>
# Future Plans
My previous LLM, Zion_Alpha, set a world record on Hugging Face by achieving the highest SNLI score for Hebrew open LLMs at 84.05. The current model, a SLERP merge, achieved a lower SNLI score but still surprised everyone by securing the highest sentiment analysis score of 70.3. This demonstrates significant untapped potential in optimizing the training process, showing that 7B models can deliver far more performance in Hebrew than previously thought possible. This will be my last Hebrew model for a while, as I have other adventures to pursue.
# Looking for Sponsors
Since all my work is done on-premises, I am constrained by my current hardware. I would greatly appreciate any support in acquiring an A6000, which would enable me to train significantly larger models much faster.
# Papers?
Maybe. We'll see. No promises here.
# Contact Details
I'm not great at self-marketing (to say the least) and don't have any social media accounts. If you'd like to reach out to me, you can email me at [email protected]. Please note that this email might receive more messages than I can handle, so I apologize in advance if I can't respond to everyone.
# Versions and QUANTS
- Base model: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha)
- Instruction tuned: [FP16](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned) | [GGUF](https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned_GGUF)
# Model architecture
Based on Mistral 7B. I didn't even bother to alter the tokenizer.
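Since the model keeps the standard Mistral 7B architecture and the stock tokenizer, it should load with the usual `transformers` Auto classes. A minimal sketch, assuming a recent `transformers` + `torch` install (the repository name comes from the links above; dtype and device placement are generic choices, not an official recipe):
```
# Minimal loading sketch; not an official example from the model author.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned"  # FP16 instruction-tuned repo linked above

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # stock Mistral tokenizer, per the note above
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,   # or torch.bfloat16, depending on your GPU
    device_map="auto",           # requires the accelerate package
)
```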
# The recommended prompt setting is Debug-deterministic:
```
temperature: 1
top_p: 1
top_k: 1
typical_p: 1
min_p: 1
repetition_penalty: 1
```
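If you run the model through `transformers` directly rather than a UI that ships this preset, the same values can be mirrored in a `GenerationConfig`. A sketch under that assumption (with `top_k: 1` the most probable token always wins, so decoding is effectively greedy; `min_p` support requires a fairly recent `transformers` release):
```
# Sketch: the "Debug-deterministic" preset above expressed as a transformers GenerationConfig.
from transformers import GenerationConfig

debug_deterministic = GenerationConfig(
    temperature=1.0,
    top_p=1.0,
    top_k=1,                 # only the single most probable token survives
    typical_p=1.0,
    min_p=1.0,               # needs a recent transformers version
    repetition_penalty=1.0,
    do_sample=True,          # sampling is on, but top_k=1 makes it deterministic
    max_new_tokens=256,      # arbitrary illustrative value, not part of the preset
)

# Usage (with model/tokenizer loaded as in the earlier sketch):
# outputs = model.generate(**inputs, generation_config=debug_deterministic)
```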
# The recommended instruction template is Mistral:
```
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- message['content'] -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'[INST] ' + message['content'].rstrip() + ' [/INST]'-}}
{%- else -%}
{{-'' + message['content'] + '</s>' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-''-}}
{%- endif -%}
```
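With `transformers`, this formatting is usually handled by `tokenizer.apply_chat_template`. A small sketch (the exact rendering depends on the chat template stored in the repository's tokenizer config, which may differ slightly from the snippet above, e.g. in BOS handling):
```
# Sketch: rendering a Mistral-style [INST] ... [/INST] prompt via the tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned")

messages = [
    {"role": "user", "content": "Translate to Hebrew: Good morning, how are you?"},
]

# Should produce something like: "[INST] Translate to Hebrew: Good morning, how are you? [/INST]"
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```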
# English to Hebrew example:
<div align="center">
<b style="font-size: 40px;">Zion_Alpha English to Hebrew example</b>
</div>
<img src="https://i.imgur.com/JnTuawF.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;">
# Hebrew to English example:
<div align="center">
<b style="font-size: 40px;">Zion_Alpha Hebrew to English example</b>
</div>
<img src="https://i.imgur.com/Wm2igLJ.png" alt="Zion_Alpha" style="width: 40%; min-width: 600px; display: block; margin: auto;">
<div align="center">
<b style="font-size: 30px;">Unscripted video: live zero shot demonstration at story writing capabilities in Hebrew</b>
[![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=YYKeovnS0do)
</div>
<div align="center">
<b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time</b>
[![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=DQFtx8M2txc)
</div>
<div align="center">
<b style="font-size: 30px;">Zion_Alpha VS Mistral 'Hebrew' Live & unscripted in real time Long text translation</b>
[![Zion_Alpha Story writing](https://img.youtube.com/vi/YYKeovnS0do/0.jpg)](https://www.youtube.com/watch?v=w5fz3Ot6tH8)
</div>
### History
The model was originally trained about two months after Mistral (v0.1) was released.
As of June 4th, 2024, Zion_Alpha holds the **highest SNLI score in the world** among open-source Hebrew models, surpassing most other models by a huge margin (score: **84.05**).
<img src="https://i.imgur.com/7HokS5w.png" alt="Zion_Alpha SNLI Score" style="width: 80%; min-width: 700px; display: block; margin: auto;">
### Support
<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">
- [My Ko-fi page](https://ko-fi.com/sicarius): ALL donations go toward research resources and compute; every bit counts
- [My Patreon](https://patreon.com/TenebraAI): ALL donations go toward research resources and compute; every bit counts