Update README.md
README.md (CHANGED)
@@ -22,7 +22,7 @@ The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into and used by many open-source projects built upon Llama. Besides, TinyLlama is compact, with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.

#### This Model
- This is an intermediate checkpoint with 240K steps and 503B tokens. **We suggest you not use this directly for inference.**
+ This is an intermediate checkpoint with 240K steps and 503B tokens. **We suggest you not use this directly for inference.** The [chat model](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.1) is always preferred.

#### How to use
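
The diff leaves the "How to use" section empty here. As a minimal sketch (not part of the original README), loading an intermediate checkpoint like this one with Hugging Face Transformers might look as follows; the repository id below is an assumption and should be replaced with this checkpoint's actual id:

```python
# Minimal sketch (not from the original README) of running an intermediate
# TinyLlama checkpoint with Hugging Face Transformers. The repository id is
# an assumption; substitute the actual id of this 240K-step checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PY007/TinyLlama-1.1B-intermediate-step-240k-503b"  # assumed id

# Same architecture and tokenizer as Llama 2, so the standard Auto classes work.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

prompt = "The TinyLlama project aims to"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that this is a raw pretraining checkpoint, not the chat model, so expect plain text completions rather than instruction-following behavior.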