---
datasets:
- Yukang/LongAlpaca-16k-length
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- llama
- llama-3
- GGUF
license: other
license_name: llama3
license_link: LICENSE
---

# GGUFs of Llama-3-8B-16K

GGUF conversion and quantization of https://huggingface.co/mattshumer/Llama-3-8B-16K

Done with Maxime Labonne's AutoGGUF.
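
As a minimal usage sketch (not part of the original release), one of the quantized files can be loaded straight from the Hub with `llama-cpp-python`; the `repo_id` and the `*Q4_K_M.gguf` filename pattern below are assumptions and should be adjusted to the files actually published in this repository:

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

# Download one quantized variant from the Hub and load it with the full 16K context.
# repo_id and filename are placeholders -- point them at this repo and a file it really contains.
llm = Llama.from_pretrained(
    repo_id="olafgeibig/Llama-3-8B-16K-GGUF",  # assumed repo id
    filename="*Q4_K_M.gguf",                   # glob matching one quantization level
    n_ctx=16384,                               # the extended context the model was tuned for
)

out = llm("Long-context language models are useful because", max_tokens=32)
print(out["choices"][0]["text"])
```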

## Original model card

This is an extended (16K) context version of LLaMA 3 8B (base, not instruct). Trained for five hours on 8x A6000 GPUs, using the `Yukang/LongAlpaca-16k-length` dataset.

`rope_theta` was set to `1000000.0`. Trained with Axolotl.
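
To verify these extended-context settings without downloading the full weights, the following sketch (an addition to this card, assuming the upstream repo's `config.json` is publicly readable) inspects the source model's configuration with `transformers`:

```python
from transformers import AutoConfig  # pip install transformers

# Fetch only the small config.json of the upstream (non-GGUF) model.
cfg = AutoConfig.from_pretrained("mattshumer/Llama-3-8B-16K")

print(cfg.rope_theta)               # expected to be 1000000.0, as stated above
print(cfg.max_position_embeddings)  # context length advertised by the config
```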