---
license: apache-2.0
datasets:
- ajibawa-2023/Code-290k-ShareGPT
- m-a-p/Code-Feedback
language:
- en
tags:
- code
- Python
- C++
- Rust
- Ruby
- Sql
- R
- Julia
---
**Code-Jamba-v0.1**
This model is trained on my dataset [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT) together with [Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback). It is finetuned from Jamba-v0.1.
It is very good at code generation in various languages such as **Python, Java, JavaScript, Go, C++, Rust, Ruby, SQL, MySQL, R, Julia, Haskell**, etc.
This model also generates a detailed explanation of the logic behind each piece of code.
**Training**
The entire dataset was trained on **2 x H100** 94GB GPUs. Training for 3 epochs took **162 hours**. Axolotl along with the DeepSpeed codebase was used for training. The base model is Jamba-v0.1 by AI21 Labs.
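For reference, an Axolotl QLoRA config for a setup like this might look like the sketch below. The actual hyperparameters used for this run are not published, so every value here is an assumption:
```yaml
# Hypothetical Axolotl QLoRA config sketch -- all values are assumptions,
# not the settings actually used to train Code-Jamba-v0.1.
base_model: ai21labs/Jamba-v0.1
load_in_4bit: true
adapter: qlora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true

datasets:
  - path: ajibawa-2023/Code-290k-ShareGPT
    type: sharegpt
  - path: m-a-p/Code-Feedback
    type: sharegpt

num_epochs: 3
deepspeed: deepspeed_configs/zero3_bf16.json
```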
This is a QLoRA model. Links to quantized models will be added soon.
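Until the quantized releases land, the full model can be loaded in 4-bit via bitsandbytes. A minimal sketch follows; the repository id is an assumption based on the model name:
```python
# Minimal 4-bit loading sketch via bitsandbytes; the repository id below is
# an assumption based on the model name, not a confirmed path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ajibawa-2023/Code-Jamba-v0.1"  # assumed repository id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```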
**GPTQ, GGUF, AWQ & Exllama**
GPTQ: TBA
GGUF: TBA
AWQ: TBA
Exllama v2: TBA
**Example Prompt:**
This model uses the **ChatML** prompt format.
```
<|im_start|>system
You are a Helpful Assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
You can modify the above prompt as per your requirements.
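A generation sketch that reuses the `model` and `tokenizer` from the loading snippet above and fills the ChatML template with a sample request (the user prompt and sampling settings are illustrative assumptions, not recommendations):
```python
# Generation sketch reusing `model` and `tokenizer` from the loading snippet
# above; the user prompt and sampling settings are illustrative assumptions.
prompt = (
    "<|im_start|>system\n"
    "You are a Helpful Assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a Python function that checks whether a number is prime.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.2)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```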
I want to say special thanks to the open-source community for helping & guiding me to better understand AI/model development.
Thank you for your love & support.
**Example Output**
Coming soon!