Quantization made by Richard Erkhov.

[GitHub](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Giraffe-v2-70b-32k - GGUF
- Model creator: https://huggingface.co/abacusai/
- Original model: https://huggingface.co/abacusai/Giraffe-v2-70b-32k/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Giraffe-v2-70b-32k.Q2_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.Q2_K.gguf) | Q2_K | 23.71GB |
| [Giraffe-v2-70b-32k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.IQ3_XS.gguf) | IQ3_XS | 26.37GB |
| [Giraffe-v2-70b-32k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.IQ3_S.gguf) | IQ3_S | 27.86GB |
| [Giraffe-v2-70b-32k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.Q3_K_S.gguf) | Q3_K_S | 27.86GB |
| [Giraffe-v2-70b-32k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.IQ3_M.gguf) | IQ3_M | 28.82GB |
| [Giraffe-v2-70b-32k.Q3_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.Q3_K.gguf) | Q3_K | 30.99GB |
| [Giraffe-v2-70b-32k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.Q3_K_M.gguf) | Q3_K_M | 30.99GB |
| [Giraffe-v2-70b-32k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.Q3_K_L.gguf) | Q3_K_L | 33.67GB |
| [Giraffe-v2-70b-32k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.IQ4_XS.gguf) | IQ4_XS | 34.64GB |
| [Giraffe-v2-70b-32k.Q4_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.Q4_0.gguf) | Q4_0 | 36.2GB |
| [Giraffe-v2-70b-32k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.IQ4_NL.gguf) | IQ4_NL | 36.55GB |
| [Giraffe-v2-70b-32k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/blob/main/Giraffe-v2-70b-32k.Q4_K_S.gguf) | Q4_K_S | 36.55GB |
| [Giraffe-v2-70b-32k.Q4_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q4_K | 38.58GB |
| [Giraffe-v2-70b-32k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q4_K_M | 38.58GB |
| [Giraffe-v2-70b-32k.Q4_1.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q4_1 | 40.2GB |
| [Giraffe-v2-70b-32k.Q5_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q5_0 | 44.2GB |
| [Giraffe-v2-70b-32k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q5_K_S | 44.2GB |
| [Giraffe-v2-70b-32k.Q5_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q5_K | 45.41GB |
| [Giraffe-v2-70b-32k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q5_K_M | 45.41GB |
| [Giraffe-v2-70b-32k.Q5_1.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q5_1 | 48.2GB |
| [Giraffe-v2-70b-32k.Q6_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q6_K | 52.7GB |
| [Giraffe-v2-70b-32k.Q8_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf/tree/main/) | Q8_0 | 68.26GB |
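
One way to fetch and run one of these files locally is with `huggingface_hub` and `llama-cpp-python`. The sketch below is a minimal example under those assumptions; the repo and file names come from the table above, while the choice of the Q4_K_S quant, the context size, and the prompt are purely illustrative.

```python
# Minimal sketch: download a quant from this repo and run it with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`; note that even a
# Q4_K_S quant of a 70B model needs roughly 40GB of RAM/VRAM to load.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/abacusai_-_Giraffe-v2-70b-32k-gguf",
    filename="Giraffe-v2-70b-32k.Q4_K_S.gguf",
)

llm = Llama(model_path=model_path, n_ctx=32768)  # 32k context, per the model name
out = llm("Giraffe necks are long because", max_tokens=64)
print(out["choices"][0]["text"])
```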

Original model description:
---
tags:
- llama2
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/DJHrZmfoy-0TzNChTrtxP.png)

## Model Details

### Model Description

We have followed up on our previous training runs related to extending the context length of Llama models. The associated GitHub repository

https://github.com/abacusai/long-context

has some basic details on our approach and metrics. We have also published a paper on arXiv that covers our experiments and analysis much more comprehensively:

http://arxiv.org/abs/2308.10882

- **Developed by:** [Abacus.AI](https://abacus.ai)
- **Model type:** Transformer-based autoregressive causal language model
- **License:** Llama 2 Community License: https://github.com/facebookresearch/llama/blob/main/LICENSE
- **Finetuned from model:** Llama V2 70B

### Usage

To use this model at longer context lengths it needs to be patched to interpolate the extended positions; it will not work if it is simply loaded with the `AutoModel` framework of `transformers`. For full details and usage see:

https://github.com/abacusai/Long-Context

The evaluation section there has detailed code for loading and patching the model for inference (or further fine-tuning). Note in particular that `max_position_embeddings` is not relevant, since the patched module dynamically reallocates the position buffers as required.

The tokenizer corresponding to this model is https://huggingface.co/abacusai/Giraffe-v1-Tokenizer.
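
As a minimal sketch, the tokenizer can also be pulled directly from that repo with the standard `transformers` API, as an alternative to the repository's `load_tokenizer()` helper used below:

```python
# Sketch: load the Giraffe-v1 tokenizer straight from the Hugging Face Hub.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("abacusai/Giraffe-v1-Tokenizer")
```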

Using the code in the repository you can load this model with the following code:
```python
from models import load_model, load_tokenizer  # helpers from the Long-Context repo

tokenizer = load_tokenizer()  # the Giraffe-v1 tokenizer linked above
# scale=8 matches the 8x interpolation from Llama 2's 4096-token base
# context to this model's 32k context.
model = load_model('abacusai/Giraffe-v2-70b-32k', scale=8)
```
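
Assuming `load_model` returns a standard `transformers` causal LM, generation would then follow the usual pattern. This is a sketch with illustrative settings, not code from the repository:

```python
# Hypothetical usage of the patched model: standard transformers generation.
prompt = "Summarize the following report: ..."  # illustrative placeholder
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```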