Steve Li (CHNtentes)
AI & ML interests: None yet
Recent Activity
New activity in NexaAIDev/omnivision-968M (3 days ago)
New activity in city96/stable-diffusion-3.5-medium-gguf (16 days ago)
New activity in stabilityai/stable-diffusion-3.5-medium (19 days ago)
Organizations: None yet
CHNtentes's activity
transformers version? (1 reply) · #5 opened 3 days ago by CHNtentes
Can Q4_0, Q4_1, Q5_0, Q5_1 be dropped? (1 reply) · #1 opened 16 days ago by CHNtentes
Where is 't5xxl.safetensors'? (4 replies) · #12 opened 20 days ago by ajavamind
Hardware requirements (4 replies) · #10 opened 2 months ago by ZahirHamroune
T4 - bfloat16 not supported (10 replies) · #2 opened 2 months ago by SylvainV
🚩 Report: Spam · #150 opened 2 months ago by CHNtentes
Is it using ggml to compute? (1 reply) · #30 opened 3 months ago by CHNtentes
For the fastest inference on 12GB VRAM, are the following GGUF models appropriate to use? (3 replies) · #4 opened 3 months ago by ViratX
Inquiry on Minimum Configuration and Cost for Running Gemma-2-9B Model Efficiently (3 replies) · #39 opened 3 months ago by ltkien2003
Error in readme? (1 reply) · #6 opened 3 months ago by CHNtentes
Good work! (1 reply) · #1 opened 3 months ago by CHNtentes
Compared to the regular FP8 model, how much better is the performance of the 8-bit model here? (4 replies) · #16 opened 3 months ago by demo001s
Please explain the difference between the two models (3 replies) · #11 opened 3 months ago by martjay
k-quants possible? (5 replies) · #2 opened 3 months ago by CHNtentes
weight dtype "default" is very slow (3 replies) · #44 opened 4 months ago by D3NN15
How do you get this to work? (7 replies) · #36 opened 4 months ago by BirdPerson22
max resolution? (4 replies) · #32 opened 4 months ago by CHNtentes
Why 12b? Who could run that locally? (47 replies) · #1 opened 4 months ago by kaidu88