sayakpaul (HF staff) committed
Commit da15611
1 Parent(s): 0304760

Update README.md

Files changed (1): README.md (+7 -2)
@@ -7,6 +7,11 @@ license_link: LICENSE.md
 
 ## Running Flux.1-dev under 12GBs
 
-This repository contains the NF4 params for the T5 and transformer of Flux-1-Dev. Check out this [Colab Notebook](https://colab.research.google.com/gist/sayakpaul/4af4d6642bd86921cdc31e5568b545e1/scratchpad.ipynb) for details on how they were obtained.
+This repository contains the NF4 params for the T5 and transformer of Flux.1-Dev. Check out this [Colab Notebook](https://colab.research.google.com/gist/sayakpaul/4af4d6642bd86921cdc31e5568b545e1/scratchpad.ipynb) for details on how they were obtained.
 
-Check out [this notebook] (TODO) that shows how to use the checkpoints and run in a free-tier Colab Notebook.
+Check out [this notebook](https://colab.research.google.com/gist/sayakpaul/8fb27a653934c1bc6b013913c346e456/scratchpad.ipynb) that shows how to use the checkpoints and run in a free-tier Colab Notebook.
+
+Respective `diffusers` PR: https://github.com/huggingface/diffusers/pull/9213/.
+
+> [!NOTE]
+> The checkpoints of this repository were optimized to run on a T4 notebook. More specifically, the compute datatype of the quantized checkpoints was kept to FP16. In practice, if you have a GPU card that supports BF16, you should change the compute datatype to BF16 (`bnb_4bit_compute_dtype`).