---
language:
- pl
license: apache-2.0
library_name: transformers
tags:
- finetuned
- gguf
inference: false
pipeline_tag: text-generation
base_model: speakleash/Bielik-11B-v2.2-Instruct
---

<p align="center">
  <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1/raw/main/speakleash_cyfronet.png">
</p>

# Bielik-11B-v2.2-Instruct-HQQ-8bit-128gs

This repo contains HQQ (8-bit, 128 group size) format model files for [SpeakLeash](https://speakleash.org/)'s [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct).

<b><u>DISCLAIMER: Quantised models may show reduced response quality and are more prone to hallucinations than the full-precision model!</u></b><br>
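
### How to use

A minimal loading sketch, assuming the [hqq](https://github.com/mobiusml/hqq) library's Hugging Face engine (`pip install hqq transformers`); the helper function and the Polish prompt below are illustrative, not part of the official release:

```python
# Illustrative sketch for loading the HQQ-quantized Bielik model.
# Assumes the `hqq` and `transformers` packages and a CUDA GPU.
MODEL_ID = "speakleash/Bielik-11B-v2.2-Instruct-HQQ-8bit-128gs"

def load_and_generate(prompt: str) -> str:
    # Heavy imports are kept local so the sketch can be read
    # without hqq installed.
    from hqq.engine.hf import HQQModelForCausalLM
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # Loads the pre-quantized 8-bit / 128-group-size weights.
    model = HQQModelForCausalLM.from_quantized(MODEL_ID)

    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    output = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(output[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(load_and_generate("Kim był Mikołaj Kopernik?"))
```

Note that hqq keeps the weights quantized in memory and dequantizes them on the fly during generation, so the model fits in substantially less VRAM than the full-precision checkpoint.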

### Model description

* **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Quant from:** [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct)
* **Finetuned from:** [Bielik-11B](https://huggingface.co/speakleash/Bielik-11B)
* **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/)

### Responsible for model quantization
* [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - team leadership, conceptualization, calibration data preparation, process creation and quantized model delivery.

## Contact Us

If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [SpeakLeash Discord](https://discord.gg/CPBxPce4).