Converted and quantized from [TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) using llama.cpp.
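
For reference, a typical llama.cpp conversion and quantization flow looks roughly like the sketch below. The script names, paths, output file names, and the Q4_K_M quantization type are illustrative assumptions (they differ between llama.cpp versions), not a record of the exact commands used for this repo.

```bash
# Rough sketch of a llama.cpp GGUF workflow; paths and script names are
# placeholders and vary between llama.cpp versions.

# 1. Convert the Hugging Face checkpoint to an FP16 GGUF file.
python convert.py ./TinyLlama-1.1B-intermediate-step-1431k-3T \
    --outfile tinyllama-1.1b-f16.gguf --outtype f16

# 2. Quantize the FP16 GGUF (Q4_K_M shown as an example).
./quantize tinyllama-1.1b-f16.gguf tinyllama-1.1b-q4_k_m.gguf Q4_K_M
```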
## Description

This is for my own experiment with Ollama: Ollama usually puts chat fine-tuned models in its library, so it would be hard to tell whether my fine-tuned LoRA adapter works if I didn't use the base/pretrained model version.

## Ollama

- Model Page: https://ollama.com/pacozaa/tinyllama
- `ollama run pacozaa/tinyllama` (a sketch for building a similar model from a local GGUF follows below)
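
To build a similar model locally, the quantized GGUF can be wrapped in an Ollama Modelfile. The file and model names below are placeholders, not the actual artifacts in this repo; the commented `ADAPTER` line marks where a LoRA adapter could be layered on top of the base model for the kind of experiment described above.

```bash
# Hedged sketch: create and run a local Ollama model from a quantized GGUF.
# File and model names are placeholders.
cat > Modelfile <<'EOF'
FROM ./tinyllama-1.1b-q4_k_m.gguf
# Uncomment to apply a LoRA adapter on top of the base model:
# ADAPTER ./my-lora-adapter.gguf
EOF

ollama create my-tinyllama -f Modelfile
ollama run my-tinyllama
```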