amd-shark/sdxl-quant-fp8
sdxl-quant-fp8
4 contributors
History: 23 commits
Latest commit: b7db598 (verified) by GiusFra, "Create config.json", about 2 months ago
Directories:
  all_linear_sym_8_calib8/              Fix names (4 months ago)
  all_sym_8_calib10/                    MI250 QKV fused and all layers sym, FP8 attention, guidance scale 8, calib steps 10 (4 months ago)
  brevitas/                             updated quant_params with QKV fusion (4 months ago)
  linear_conv_fp8_sdpa_fp16_eq_bl/      Create config.json (about 2 months ago)
  linear_conv_fp8_sdpa_fp16_no_eq_bl/   Create config.json (about 2 months ago)
  linear_conv_fp8_sdpa_fp8_eq_bl/       Create config.json (about 2 months ago)
  linear_conv_fp8_sdpa_fp8_no_eq_bl/    Updated sdpa fp8 models (2 months ago)
Files:
  .gitattributes (2.08 kB)          Added models that are fully quantized with FP8. (2 months ago)
  attn.py (6.26 kB)                 Added SDPA math model & test (4 months ago)
  sdxl.json (2.19 MB)               Upload sdxl.json with huggingface_hub (6 months ago)
  sdxl.safetensors (5.14 GB, LFS)   Upload sdxl.safetensors with huggingface_hub (6 months ago)
  test_attn.py (1.29 kB)            Added SDPA math model & test (4 months ago)