cmshin96 committed
Commit d794435
1 Parent(s): 37dcf52

Create README.md

Files changed (1): README.md (+32, -0)
README.md ADDED
 
---
base_model: Sao10K/MN-12B-Lyra-v1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
tags:
- 4-bit
- AWQ
- text-generation
- vllm
- aphrodite
---
# Sao10K/MN-12B-Lyra-v1

- Model creator: [Sao10K](https://huggingface.co/Sao10K)
- Original model: [MN-12B-Lyra-v1](https://huggingface.co/Sao10K/MN-12B-Lyra-v1)

### About AWQ

AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with equivalent or better quality than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
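
For example, loading this quant with Transformers 4.35.0 or later might look like the minimal sketch below. The repo id `cmshin96/MN-12B-Lyra-v1-AWQ` is a placeholder (this card does not state the quant's repo id), and the `autoawq` and `accelerate` packages are assumed to be installed:

```python
# Minimal sketch: load this AWQ quant with Transformers 4.35.0+.
# The repo id below is a placeholder; substitute the actual repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cmshin96/MN-12B-Lyra-v1-AWQ"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers reads the AWQ quantization config stored in the checkpoint
# and loads the 4-bit weights through the AutoAWQ kernels.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a short poem about the sea."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```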

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, with support for all model types (see the sketch after this list)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
- [Aphrodite](https://github.com/PygmalionAI/aphrodite-engine) version 0.3.5 and later
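
As a sketch of the vLLM route (offline batch inference), assuming vLLM 0.2.2 or later on a supported NVIDIA GPU; the repo id is again a placeholder:

```python
# Minimal sketch: offline batch inference of this AWQ quant with vLLM 0.2.2+.
from vllm import LLM, SamplingParams

# The repo id below is a placeholder; substitute the actual repo id.
llm = LLM(model="cmshin96/MN-12B-Lyra-v1-AWQ", quantization="awq")
params = SamplingParams(temperature=0.8, max_tokens=128)

outputs = llm.generate(["Write a short poem about the sea."], params)
for output in outputs:
    print(output.outputs[0].text)
```

For serving rather than offline inference, vLLM also ships an OpenAI-compatible server (`python -m vllm.entrypoints.openai.api_server --model <repo-id> --quantization awq`).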