Robert Sinclair
ZeroWw
AI & ML interests
LLM optimization (model quantization and back-end optimizations), so that LLMs can run on the computers of people who still have both kidneys. Discord: https://discord.com/channels/@robert_46007
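The quantized GGUF models in the activity list below can be run locally with llama-cpp-python. Here is a minimal sketch: the repo id is taken from the activity feed, but the GGUF filename, context size, and prompt are illustrative assumptions rather than the author's documented workflow.

```python
# Minimal sketch: download one of the GGUF quants and run it with llama-cpp-python.
# Requires: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="ZeroWw/L3.2-Rogue-Creative-Instruct-Uncensored-Abliterated-7B-D_AU-SILLY",
    filename="model.q6_k.gguf",  # hypothetical filename: check the repo's file list for the real quant names
)

llm = Llama(model_path=model_path, n_ctx=4096)  # adjust context size to your available RAM
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```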
Recent Activity
New activity 25 days ago in DavidAU/L3.2-Rogue-Creative-Instruct-Uncensored-Abliterated-7B-GGUF: Silly version
Updated a model 25 days ago: ZeroWw/L3.2-Rogue-Creative-Instruct-Uncensored-Abliterated-7B-D_AU-SILLY
ZeroWw's activity
Brainstorm 40x method developed by David_AU (4) · #1 opened 25 days ago by ZeroWw
Silly version · #2 opened 25 days ago by ZeroWw
My quants and silly experiment. (2) · #1 opened 28 days ago by ZeroWw
Any chance of a 1B/2B/3B/4B model? (2) · #5 opened 29 days ago by ZeroWw
Request (1) · #1 opened about 1 month ago by Zeldazackman
Question about your quantization method (3) · #1 opened about 1 month ago by rollercoasterX
Where is Ministral 3B? (1) · #9 opened about 1 month ago by ZeroWw
1B and 3B are nice. Please also make an 8B so we can compare it to Gemini Flash 8B. (3) · #20 opened about 2 months ago by ZeroWw
Heads up: this isn't the new Ministral 3B (2) · #1 opened about 1 month ago by bartowski
General discussion and feedback. (3) · #1 opened about 1 month ago by xxx31dingdong
My quants and "silly" experiment.
1
#1 opened 3 months ago
by
ZeroWw
My quants and my "silly" version.
#4 opened 3 months ago
by
ZeroWw
Dataset Viewer issue: JobManagerCrashedError (6) · #1 opened 4 months ago by ZeroWw
Can't find a way to make it work with llama.cpp (1) · #102 opened 5 months ago by ZeroWw
My quants and "silly" experiment.
1
#2 opened 3 months ago
by
ZeroWw
My alternate quantizations. (5) · #3 opened 4 months ago by ZeroWw
My alternate quantizations and the "silly" experiment. · #2 opened 4 months ago by ZeroWw
How about a scaled-down version like 7B/8B? (3) · #2 opened 4 months ago by ZeroWw
My alternative quants and "silly" experiment. · #13 opened 4 months ago by ZeroWw
ZeroWw/codegeex4-all-9b.q6_k not good (1) · #2 opened 4 months ago by ngoquytuan