---
license: other
datasets:
- mlabonne/orpo-dpo-mix-40k
- Open-Orca/SlimOrca-Dedup
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
base_model: Locutusque/llama-3-neural-chat-v1-8b
library_name: transformers
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# Locutusque/llama-3-neural-chat-v1-8b AWQ

- Model creator: [Locutusque](https://huggingface.co/Locutusque)
- Original model: [llama-3-neural-chat-v1-8b](https://huggingface.co/Locutusque/llama-3-neural-chat-v1-8b)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6437292ecd93f4c9a34b0d47/6XQuhjWNr6C4RbU9f1k99.png)

## Model Summary

I fine-tuned Llama 3 8B using an approach similar to Intel's neural-chat language model, with slightly modified data sources to make it stronger in coding, math, and writing. Training used both SFT and DPO; a sketch of the DPO stage follows the training-data list below.

The resulting model performs particularly well in writing and coding.
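
## How to Use

A minimal inference sketch with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) and `transformers` (`pip install autoawq transformers`). The repo id below is a placeholder for wherever this quant is hosted, and the generation settings are illustrative assumptions rather than recommended defaults:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Placeholder repo id: substitute the Hugging Face repo that hosts this AWQ quant.
model_id = "solidrust/llama-3-neural-chat-v1-8b-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# fuse_layers fuses attention/MLP modules for faster inference.
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True)

# Format the prompt with the tokenizer's built-in Llama 3 chat template.
messages = [{"role": "user", "content": "Explain AWQ quantization in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).cuda()  # assumes a CUDA-capable GPU

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```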

## Training Data
- Open-Orca/SlimOrca-Dedup
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- mlabonne/orpo-dpo-mix-40k
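
For illustration, here is a minimal sketch of the DPO stage using [TRL](https://github.com/huggingface/trl) and the `mlabonne/orpo-dpo-mix-40k` preference data listed above. This is a generic recipe under assumed hyperparameters, not the author's actual training script; note that older TRL versions take `tokenizer=` instead of `processing_class=`:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Stand-in starting point: in the recipe above, DPO runs on the SFT
# checkpoint, not the raw base model.
model_id = "meta-llama/Meta-Llama-3-8B"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Preference pairs: each row carries a "chosen" and a "rejected" completion.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

args = DPOConfig(
    output_dir="llama-3-neural-chat-dpo",
    beta=0.1,                       # preference-loss temperature (assumed value)
    per_device_train_batch_size=2,  # assumed; tune to your hardware
    num_train_epochs=1,
)
trainer = DPOTrainer(
    model=model,                    # ref_model defaults to a frozen copy of `model`
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```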