nisten committed
Commit a2cf896
1 Parent(s): 370fbdb

Update README.md

Files changed (1)
  1. README.md +7 -2
README.md CHANGED
@@ -12,10 +12,15 @@ This repository contains CPU-optimized GGUF quantizations of the Meta-Llama-3.1-
  1. Q4_0_48 (CPU Optimized): ~264 GB
  2. BF16: ~855 GB
  3. Q8_0: ~435 GB
+ x. more coming...

- ## Download Instructions
+ ## Use Aria2 for parallelized downloads, links will download 9x faster

- To download the model files, you can use aria2c for faster, multi-connection downloads. Here are the commands for each quantization:
+ >>[!TIP]🐧 On Linux `sudo apt install -y aria2`
+ >>
+ >>🍎 On Mac `brew install aria2`
+ >>
+ >>Feel free to paste these all in at once or one at a time

  ### Q4_0_48 (CPU Optimized) Version

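The per-quantization download commands referenced under "### Q4_0_48 (CPU Optimized) Version" are outside this hunk. As a point of reference, here is a minimal sketch of the kind of multi-connection aria2c invocation such a section would pair with the tip above; the `<user>/<repo>` path and the shard file name are placeholders, not the actual links from the README:

```bash
# Sketch only: <user>/<repo> and the shard name are placeholders for the
# real links listed under each quantization section of the README.
# -x 16 : allow up to 16 connections to the server
# -s 16 : split the file into 16 segments downloaded in parallel
# -c    : resume a partially downloaded file
# -o    : local output file name
aria2c -x 16 -s 16 -c \
  -o model-Q4_0_48-00001-of-000NN.gguf \
  "https://huggingface.co/<user>/<repo>/resolve/main/model-Q4_0_48-00001-of-000NN.gguf"
```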