Abdideen (zabdideen)
0 followers · 1 following
AI & ML interests
None yet
Recent Activity
Reacted to ImranzamanML's post with 🧠 about 2 months ago:
Instead of calculating errors, LLMs are better at self-evaluation! It's easier to assess the quality of a response than to generate one, which enables LLMs to evaluate their own performance. It's like trying to figure out how many ingredients you left out while cooking a recipe without knowing exactly which ones you missed. LLMs, like experienced cooks, can't always tell you which specific step they skipped, but they can guess how close they got to the final dish. For example, if your meal tastes only 75% right, you know something is off, but you're not sure exactly what. So instead of identifying every missed ingredient, just estimate how well the dish turned out overall. It's easier to judge whether the meal tastes good than to pinpoint each small mistake. LLMs do the same: they estimate how well they performed without knowing every single error, which allows them to self-evaluate. https://huggingface.co/meta-llama/Llama-3.2-1B
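The idea in the post can be sketched as a two-call loop: one call generates an answer, a second (and typically easier) call asks the model to rate that answer. The `generate` function below is a hypothetical stand-in for any LLM call, e.g. a local model such as meta-llama/Llama-3.2-1B or a chat-completion API; the stub responses are illustrative only.

```python
# Minimal sketch of LLM self-evaluation: producing an answer is one model
# call, judging it is a second, simpler call. `generate` is a hypothetical
# stand-in for a real LLM; here it is stubbed so the sketch is runnable.

def generate(prompt: str) -> str:
    # Stub: a real implementation would call an LLM here.
    if "Rate the following answer" in prompt:
        return "7"  # judging quality is easier than producing the answer
    return "Paris is the capital of France."

def self_evaluate(question: str) -> tuple:
    """Ask the model for an answer, then ask it to score that answer 1-10."""
    answer = generate(f"Question: {question}\nAnswer:")
    critique_prompt = (
        f"Rate the following answer to '{question}' on a 1-10 scale. "
        f"Reply with the number only.\nAnswer: {answer}"
    )
    score = int(generate(critique_prompt).strip())
    return answer, score

answer, score = self_evaluate("What is the capital of France?")
print(answer, score)
```

With a real model behind `generate`, the score lets the caller decide whether to retry or accept, without ever identifying the specific errors, which mirrors the post's point.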
Reacted to ImranzamanML's post with ➕ about 2 months ago (same post as above).
Reacted to ImranzamanML's post with 😎 about 2 months ago (same post as above).
Organizations
None yet
zabdideen's activity
liked a dataset about 2 months ago:
ImranzamanML/Clinical_Documents_on_Syndromes_Disease · Updated Apr 11 · 21.1k · 135 · 3
liked 3 models about 2 months ago:
ImranzamanML/ai_psychiatrist · Updated Aug 22 · 1
ImranzamanML/finetuned_llama3.1 · Updated Aug 22 · 1
ImranzamanML/1B_finetuned_llama3.2 · Updated Oct 6 · 3