Lone Striker (LoneStriker)
AI & ML interests: None yet
LoneStriker's activity
- How good is the gguf? · 3 replies · #3 opened 4 months ago by Tom-Neverwinter
- Exl Quants please · 1 reply · #5 opened 4 months ago by rjmehta
- exl2 · 3 replies · #1 opened 4 months ago by DmitryPLSKN
- 4.25bpw version · 4 replies · #1 opened 4 months ago by Apel-sin
- Exl Quant Request · 2 replies · #1 opened 6 months ago by Clevyby
- ExLlamaV2 inference questions · 2 replies · #1 opened 4 months ago by shawei3000
- Have a question · 2 replies · #1 opened 4 months ago by LittenBuzz
- Measurement file please? · 1 reply · #1 opened 5 months ago by p1kp4k
- Did you have to do anything special to quantize this one? · 1 reply · #1 opened 5 months ago by gghfez
- Exl2 Quants of tdrussell/Llama-3-70B-Instruct-Storywriter? · 5 replies · #1 opened 5 months ago by binahz
- 2.25 bpw · 2 replies · #1 opened 5 months ago by eldodemi2039
- Exl2 quants please · 3 replies · #8 opened 5 months ago by rjmehta
- Endless generation · 3 replies · #1 opened 6 months ago by qenthousiast
- Lower quants · 1 reply · #1 opened 6 months ago by Desm0nt
- exl2 quants · 3 replies · #1 opened 6 months ago by Wsdfsdf
- eos_token should be <|eot_id|> · 5 replies · #1 opened 6 months ago by AUTOMATIC
- Best model for RP I have ever tried · 12 replies · #2 opened 6 months ago by Franchu
- The model is great but then suddenly writes weirdly · 2 replies · #1 opened 6 months ago by Flickaboo
- How come I can't · 3 replies · #1 opened 6 months ago by BigHuggyD
- Exl2 Quants · 1 reply · #3 opened 6 months ago by rjmehta
- Exl quant request · 2 replies · #1 opened 7 months ago by Clevyby
- exl2 please · 2 replies · #3 opened 7 months ago by zypcastles
- Exl quant request · 3 replies · #1 opened 7 months ago by Clevyby
- 32k Context? · 2 replies · #1 opened 7 months ago by swiftwater
- How can I convert a .safetensors file to a .gguf file? · 2 replies · #1 opened 7 months ago by NoneGG
- Request for dreamgen/opus-v1.2-70b · 3 replies · #1 opened 7 months ago by homeworkace
- GPTQ/AWQ quant that is runnable in vLLM? · 2 replies · #4 opened 7 months ago by Light4Bear
- Will this fit on a single 24GB video card (4090)? · 1 reply · #2 opened 7 months ago by clevnumb
- exl2 quants · 1 reply · #1 opened 7 months ago by Wsdfsdf
- Can you provide the EXL2 quantized model? · 1 reply · #1 opened 7 months ago by xldistance
- License? · 1 reply · #1 opened 7 months ago by abipani
- Create GGUF for this please · 8 replies · #2 opened 7 months ago by ishanparihar
- I really hope you do a 6bpw · 2 replies · #1 opened 7 months ago by bigfish3
- Token overrides (added_tokens_decoder) · 2 replies · #1 opened 7 months ago by dranger003
- Update README.md · #1 opened 7 months ago by wolfram
- Update README.md · #1 opened 7 months ago by wolfram
- Update README.md · #1 opened 7 months ago by wolfram
- What quantization is it? Can it be used with vLLM? · 1 reply · #1 opened 7 months ago by rafa9
- The 7B is worse than the 2B, and I don't know why; it's especially messy at writing code, just like Gemma, but the 2B is much better. Has anyone else seen this? · 1 reply · #1 opened 7 months ago by laooopooo
- Kindly asking for quants · 7 replies · #2 opened 7 months ago by wolfram
- Can you produce a 2.4bpw exl2 quantization of this model? · 1 reply · #2 opened 7 months ago by xldistance
- Won't load in TextGen WebUI · 1 reply · #1 opened 7 months ago by GamingDaveUK
- LM Studio config from description missing · 1 reply · #1 opened 7 months ago by ariffinsetya
- Weirdness with offloading to VRAM · 3 replies · #3 opened 7 months ago by Bjorno
- This model vs LoneStriker/Smaugv0.1-6.0bpw-h6-exl2 · 4 replies · #1 opened 7 months ago by ainoob101
- VRAM requirements · 2 replies · #1 opened 8 months ago by paperplanedeemo
- Which dataset was used for quantization? · 1 reply · #1 opened 8 months ago by Yhyu13
- Could you please quantize the model? · 2 replies · #1 opened 8 months ago by Serpen
- exl2 quantization? · 2 replies · #1 opened 8 months ago by musicurgy
- Exl2 Quants? · 3 replies · #3 opened 8 months ago by eldodemi2039
- exl2? · 3 replies · #1 opened 8 months ago by bdambrosio
- Safetensors naming convention · 10 replies · #1 opened 8 months ago by dannysemi
- 🙏🏻 Praying for quantized · 11 replies · #12 opened 8 months ago by Tonic
- GGUF · 2 replies · #1 opened 8 months ago by MarxistLeninist
- A minor bug · 2 replies · #1 opened 8 months ago by sethuiyer
- Request for another exl quant · 9 replies · #1 opened 8 months ago by Clevyby
- I want to quantize the model to exl2 and then finetune it · 2 replies · #1 opened 8 months ago by Mihir1108
- This model sometimes has a quantization problem (I think) · 3 replies · #1 opened 8 months ago by Belarrius