Max Current

eldogbbhed

AI & ML interests

None yet

Recent Activity

liked a model about 1 month ago
MeissonFlow/Meissonic
liked a model about 1 month ago
SWivid/F5-TTS
liked a model about 1 month ago
Mi6paulino/Michaelpaulino

Organizations

None yet

eldogbbhed's activity

liked a Space 2 months ago
Reacted to Smooke's post with 🧠 2 months ago
Chomsky predicting LLMs in 1956, curated by Ryan Rhodes (Rutgers)
Reacted to mlabonne's post with 👍 3 months ago
Large models are surprisingly bad storytellers.

I asked 8 LLMs to "Tell me a bedtime story about bears and waffles."

Claude 3.5 Sonnet and GPT-4o gave me the worst stories: no conflict, no moral, zero creativity.

In contrast, smaller models were quite creative and wrote stories involving talking waffle trees and bears ostracized for their love of waffles.

Here you can see a comparison between Claude 3.5 Sonnet and NeuralDaredevil-8B-abliterated. Both start with a family of bears, but they quickly diverge in personality, conflict, and so on.

I mapped each story onto the hero's journey to have some kind of framework. Prompt engineering can definitely help here, but it's still disappointing that the larger models don't produce better stories right off the bat.

Do you know why smaller models outperform the frontier models here?
upvoted an article 3 months ago