Felladrin committed
Commit 314e0f6 • 1 Parent(s): 1fa3971

Update TinyLlama_logo.png in the Readme (#2)

- Update TinyLlama_logo.png in the Readme (dfbed0d2a5fdaac9ebcdf6317beddb584bcebdda)


Co-authored-by: Victor Nogueira <[email protected]>

Files changed (1)
1. README.md +1 -1
README.md CHANGED
@@ -16,7 +16,7 @@ https://github.com/jzhang38/TinyLlama
  The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
 
  <div align="center">
- <img src="./TinyLlama_logo.png" width="300"/>
+ <img src="https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b/resolve/main/TinyLlama_logo.png" width="300"/>
  </div>
 
  We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
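
Because the README quoted in the diff states that TinyLlama reuses the Llama 2 architecture and tokenizer, the checkpoint can be loaded with standard Llama tooling. Below is a minimal sketch using the Hugging Face transformers library; the repository id is taken from the logo URL in the diff above, while the prompt and generation settings are illustrative assumptions rather than anything specified in this commit.

```python
# Minimal sketch: load the checkpoint referenced in this repo with transformers.
# The repo id comes from the logo URL in the diff; prompt and max_new_tokens are
# illustrative choices, not part of the original commit.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PY007/TinyLlama-1.1B-intermediate-step-240k-503b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The TinyLlama project aims to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```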