---
language:
- en
license: other
license_name: llama3
model_name: Llama-3-Instruct-8B-SimPO
base_model: princeton-nlp/Llama-3-Instruct-8B-SimPO
inference: false
model_creator: princeton-nlp
model_type: llama
pipeline_tag: text-generation
quantized_by: Second State Inc.
---
![](https://github.com/GaiaNet-AI/.github/assets/45785633/d6976adc-f97d-4f86-a648-0f2f5c8e7eee)

# Llama-3-Instruct-8B-SimPO-GGUF

## Original Model

[princeton-nlp/Llama-3-Instruct-8B-SimPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO)
## Run with GaiaNet

**Prompt template**

prompt template: `llama-3-chat`
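
For reference, the `llama-3-chat` template name corresponds to the standard Llama 3 instruct chat format, which wraps each message in header and end-of-turn special tokens:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_message}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user_message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```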

**Context size**

chat_ctx_size: `4096`
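
In a GaiaNet node, these two settings end up in the node's `config.json`. The snippet below is an illustrative sketch only, assuming the standard config fields (`chat` for the model download URL, `prompt_template`, and `chat_ctx_size`); see the customize guide linked below for the authoritative schema:

```json
{
  "chat": "https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-Q5_K_M.gguf",
  "prompt_template": "llama-3-chat",
  "chat_ctx_size": "4096"
}
```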

**Run with GaiaNet**

- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Llama-3-Instruct-8B-SimPO-Q2_K.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q2_K.gguf) | Q2_K | 2 | 3.18 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3-Instruct-8B-SimPO-Q3_K_L.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q3_K_L.gguf) | Q3_K_L | 3 | 4.32 GB | small, substantial quality loss |
| [Llama-3-Instruct-8B-SimPO-Q3_K_M.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q3_K_M.gguf) | Q3_K_M | 3 | 4.02 GB | very small, high quality loss |
| [Llama-3-Instruct-8B-SimPO-Q3_K_S.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q3_K_S.gguf) | Q3_K_S | 3 | 3.66 GB | very small, high quality loss |
| [Llama-3-Instruct-8B-SimPO-Q4_0.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q4_0.gguf) | Q4_0 | 4 | 4.66 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3-Instruct-8B-SimPO-Q4_K_M.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q4_K_M.gguf) | Q4_K_M | 4 | 4.92 GB | medium, balanced quality - recommended |
| [Llama-3-Instruct-8B-SimPO-Q4_K_S.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q4_K_S.gguf) | Q4_K_S | 4 | 4.69 GB | small, greater quality loss |
| [Llama-3-Instruct-8B-SimPO-Q5_0.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q5_0.gguf) | Q5_0 | 5 | 5.6 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3-Instruct-8B-SimPO-Q5_K_M.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q5_K_M.gguf) | Q5_K_M | 5 | 5.73 GB | large, very low quality loss - recommended |
| [Llama-3-Instruct-8B-SimPO-Q5_K_S.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q5_K_S.gguf) | Q5_K_S | 5 | 5.6 GB | large, low quality loss - recommended |
| [Llama-3-Instruct-8B-SimPO-Q6_K.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q6_K.gguf) | Q6_K | 6 | 6.6 GB | very large, extremely low quality loss |
| [Llama-3-Instruct-8B-SimPO-Q8_0.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-Q8_0.gguf) | Q8_0 | 8 | 8.54 GB | very large, extremely low quality loss - not recommended |
| [Llama-3-Instruct-8B-SimPO-f16.gguf](https://huggingface.co/gaianet/Llama-3-Instruct-8B-SimPO-GGUF/blob/main/Llama-3-Instruct-8B-SimPO-f16.gguf) | f16 | 16 | 16.1 GB | full 16-bit precision - largest, reference only |
*Quantized with llama.cpp b2963.*
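
To fetch one of these files directly, outside the GaiaNet tooling, the `huggingface_hub` Python client can download a single GGUF from this repo. A minimal sketch, using the recommended Q5_K_M file as an example (any filename from the table above can be substituted):

```python
from huggingface_hub import hf_hub_download

# Download one quantized model file from this repo and
# return the local filesystem path to the cached copy.
model_path = hf_hub_download(
    repo_id="gaianet/Llama-3-Instruct-8B-SimPO-GGUF",
    filename="Llama-3-Instruct-8B-SimPO-Q5_K_M.gguf",
)
print(model_path)
```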