Joseph
Joseph717171
AI & ML interests: None yet
Recent Activity
reacted to cfahlgren1's post with 🔥 about 5 hours ago
reacted to cfahlgren1's post about 5 hours ago
New activity in arcee-ai/SuperNova-Medius, about 6 hours ago
Joseph717171's activity
Wicked cool experiment! (2)
#1 opened 1 day ago by Joseph717171
Best prose in a model I've ever seen. (5)
#5 opened about 1 month ago by Dsol58
New activity in Joseph717171/Hermes-3-Llama-3.1-8B_TIES_with_Base_Embeds_Initialized_to_Special_Instruct_Toks_dtypeF32, 28 days ago
This LLM is hallucinating like crazy. Can someone verify these prompts? (28)
#3 opened about 1 month ago by phil111
Ideal quantization levels (2)
#6 opened about 1 month ago by jadbox
That was fast! (3)
#1 opened about 1 month ago by rollercoasterX
Different Q4 models (1)
#1 opened about 1 month ago by animax
What is your "continuous finetuning"? (7)
#2 opened about 2 months ago by MaziyarPanahi
Explain the rationale for your density values
#1 opened about 2 months ago by Joseph717171
Explain these Benchmark Results (2)
#2 opened about 2 months ago by Joseph717171
Distill Llama-3.2-1B-Instruct from Llama-405B-Instruct to make SuperNova-Pico (1)
#14 opened about 2 months ago by Joseph717171
Paper? (1)
#1 opened about 2 months ago by Joseph717171
This repo revision has at least one file that has been marked as unsafe. (2)
#11 opened 2 months ago by MayensGuds
Why is the tokenizer.json not the same as Llama-3.1-8B-Instruct's? (1)
#6 opened 2 months ago by Joseph717171
Fixed tokenizer.json so it matches Llama-3.1-8B-Instruct's tokenizer.json (1)
#5 opened 2 months ago by Joseph717171
Bad at following instructions (3)
#3 opened 2 months ago by Daemontatox
Love the UGI Leaderboard. We should add a quantized category, as not all quantizations are equal (1)
#37 opened 2 months ago by Joseph717171
Bartowski! Let's see how your imatrix differs from mine. (5)
#2 opened 3 months ago by Joseph717171