Mohammed Hamdy

mmhamdy

AI & ML interests

NLP | Reinforcement Learning

mmhamdy's activity

reacted to m-ric's post with 🚀 24 days ago
💥 L-Mul: Addition-Only Multiplication can slash computational costs by 80%!

Researchers dropped a groundbreaking technique that could slash the energy use of transformer computations: their novel "linear-complexity multiplication" (L-Mul) algorithm approximates floating-point multiplication using energy-efficient integer additions instead of costly multiplications.

💡 Quick reminder on how floats are coded on 8 bits (FP8):
In the e4m3 FP8 standard, you encode a number as:
Sign (1 bit) | Exponent (4 bits) | Mantissa (3 bits)
Example: 0 (positive) | 1000 (8) | 101 (1/2 + 1/8 = 0.625)
Calculation: you add one to the mantissa, then multiply by 2 raised to (the exponent minus a bias term, which is 7 for e4m3):

➡️ You get (1 + 0.625) × 2^(8-7) = 3.25
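
As a quick check on that arithmetic, here is a minimal Python sketch (mine, not the paper's) that decodes a normalized e4m3 value exactly as described; it ignores subnormals and special values:

```python
def decode_e4m3(sign_bit: int, exponent_bits: int, mantissa_bits: int) -> float:
    """Decode a normalized FP8 e4m3 value: (-1)^sign * (1 + mantissa/8) * 2^(exponent - bias)."""
    bias = 7                          # e4m3 exponent bias
    mantissa = mantissa_bits / 8      # 3 mantissa bits -> eighths
    return (-1) ** sign_bit * (1 + mantissa) * 2 ** (exponent_bits - bias)

# The example above: sign = 0, exponent = 0b1000 (8), mantissa = 0b101 (0.625)
print(decode_e4m3(0, 0b1000, 0b101))  # 3.25
```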

Now back to the paper. Key insights:

⚡️ Multiplication is extremely energy-intensive compared to addition. For 32-bit operations, multiplication (3.7 pJ) uses 37x more energy than addition (0.1 pJ)!

🧮 Traditional floating-point multiplication goes like this (writing xm for the mantissa and xe for the exponent): Mul(x,y) = (1 + xm) · 2^xe · (1 + ym) · 2^ye = (1 + xm + ym + xm · ym) · 2^(xe+ye)

💡 L-Mul cleverly approximates this as: L-Mul(x,y) = (1 + xm + ym + 2^-l(m)) · 2^(xe+ye), eliminating the costly xm · ym term

🔧 The l(m) term is set adaptively based on the mantissa size for optimal accuracy
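
To make the approximation concrete, here is a small illustrative sketch (my own, not the paper's code) comparing the exact mantissa product with the L-Mul shortcut; the l(m) rule below is an assumption based on my reading of the paper, so double-check it against the original text:

```python
def exact_mul(xm, ym, xe, ye):
    """Exact product of (1 + xm)·2^xe and (1 + ym)·2^ye (signs and renormalization ignored)."""
    return (1 + xm + ym + xm * ym) * 2 ** (xe + ye)

def l_mul(xm, ym, xe, ye, m=3):
    """L-Mul approximation: drop the xm·ym term and add a fixed 2^-l(m) offset instead.

    l(m) depends on the mantissa bit-width m; the mapping below is an assumption.
    """
    l = m if m <= 3 else (3 if m == 4 else 4)
    return (1 + xm + ym + 2 ** -l) * 2 ** (xe + ye)

# Squaring the FP8 example from above (xm = ym = 0.625, unbiased exponent 1):
print(exact_mul(0.625, 0.625, 1, 1))  # 10.5625  (= 3.25 * 3.25)
print(l_mul(0.625, 0.625, 1, 1))      # 9.5      (addition-only approximation)
```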

📊 Benchmarks on the Llama-3.1-8B-Instruct model show L-Mul preserves precision across various NLP tasks, with performance nearly identical to full BFloat16 precision

💬 Authors claim: "We can achieve the same model inference performance while reducing the energy cost of attention computations by 80%."

This breakthrough is still theoretical and would need implementation on dedicated hardware to confirm real-world gains, but it’s a really exciting path for more sustainable AI! 🌱

Read the paper here 👉  Addition is All You Need for Energy-efficient Language Models (2410.00907)
posted an update 26 days ago
🔗 Evaluating Long Context #1: Long Range Arena (LRA)

Accurately evaluating how well language models handle long contexts is crucial, but it's also quite challenging to do well. In this series of posts, we're going to examine the various benchmarks that were proposed to assess long context understanding, starting with Long Range Arena (LRA).

Introduced in 2020, Long Range Arena (LRA) is one of the earliest benchmarks designed to tackle the challenge of long context evaluation.

📌 Key Features of LRA

1️⃣ Diverse Tasks: The LRA benchmark consists of a suite of tasks designed to evaluate model performance on long sequences ranging from 1,000 to 16,000 tokens. These tasks encompass different data types and modalities: Text, Natural and Synthetic Images, and Mathematical Expressions.

2️⃣ Synthetic and Real-world Tasks: LRA comprises both synthetic probing tasks and real-world tasks.

3️⃣ Open-Source and Extensible: Implemented in Python using Jax and Flax, the LRA benchmark code is publicly available, making it easy to extend.

📌 Tasks

1️⃣ Long ListOps (a toy example follows this list)

2️⃣ Byte-level Text Classification and Document Retrieval

3️⃣ Image Classification

4️⃣ Pathfinder and Pathfinder-X (Long-range spatial dependency)
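
To give a feel for what a ListOps input looks like, here is a toy expression and evaluator (purely illustrative; the real benchmark uses the official data generator and far longer sequences):

```python
import statistics

# Toy ListOps-style operators: MAX, MIN, MED (median), SM (sum modulo 10).
OPS = {
    "[MAX": max,
    "[MIN": min,
    "[MED": lambda xs: int(statistics.median(xs)),
    "[SM": lambda xs: sum(xs) % 10,
}

def eval_listops(tokens):
    """Recursively evaluate a tokenized ListOps expression."""
    def helper(i):
        op, args, i = OPS[tokens[i]], [], i + 1
        while tokens[i] != "]":
            if tokens[i] in OPS:
                val, i = helper(i)
            else:
                val, i = int(tokens[i]), i + 1
            args.append(val)
        return op(args), i + 1
    return helper(0)[0]

expr = "[MAX 4 3 [MIN 2 3 ] 1 0 [MED 1 5 8 9 2 ] ]"
print(eval_listops(expr.split()))  # MIN -> 2, MED -> 5, so MAX(4, 3, 2, 1, 0, 5) = 5
```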

👨‍💻 Long Range Arena (LRA) Github Repository: https://github.com/google-research/long-range-arena

📄 Long Range Arena (LRA) paper: Long Range Arena: A Benchmark for Efficient Transformers (2011.04006)
reacted to davidberenstein1957's post with ❤️ about 1 month ago
🎉 Exciting News: Argilla 2.2.0 is Here! 🚀

We're thrilled to announce the release of Argilla 2.2.0, packed with powerful new features to enhance your data annotation and LLM workflow:

🗨️ ChatField: Work with text conversations natively in Argilla. Perfect for building datasets for conversational LLMs! (see the usage sketch after this list)
⚙️ Adjustable Task Distribution: Modify settings on the fly and automatically recalculate completed and pending records.
📊 Progress Tracking: Monitor annotation progress directly from the SDK, including user-specific metrics.
🧠 Automatic Settings Inference: Importing datasets from Hugging Face Hub just got easier with automatic settings detection.
📋 Task Templates: Jump-start your projects with pre-built templates for common dataset types.
🔧 Background Jobs Support: Improved performance for long-running tasks (requires Redis).
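
For a rough idea of how ChatField might be used from the 2.x SDK, here is a minimal sketch based on the release notes; the dataset, field, and question names are invented, and the exact signatures may differ, so check the Argilla docs:

```python
import argilla as rg

client = rg.Argilla(api_url="<your-argilla-url>", api_key="<your-api-key>")

# Hypothetical dataset: one conversation field plus a single label question.
settings = rg.Settings(
    fields=[rg.ChatField(name="conversation")],
    questions=[rg.LabelQuestion(name="quality", labels=["good", "bad"])],
)
dataset = rg.Dataset(name="chat-annotation-demo", settings=settings)
dataset.create()

# A ChatField record is a list of role/content messages.
dataset.records.log([
    {"conversation": [
        {"role": "user", "content": "What is Argilla?"},
        {"role": "assistant", "content": "An open-source tool for data annotation."},    ]}
])
```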

Upgrade now and supercharge your data workflows!

Check out our full changelog for more details: https://github.com/argilla-io/argilla/compare/v2.1.0...v2.2.0
reacted to davidberenstein1957's post with 🔥 about 1 month ago
reacted to davidberenstein1957's post with 🚀 about 1 month ago
reacted to Jofthomas's post with 🔥 3 months ago
Everchanging Quest is out!

It is an LLM-controlled rogue-like in which the LLM gets a markdown representation of the map and must generate a JSON with the objective to fulfill on the map, as well as the necessary objects and their placements.
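
Purely to illustrate that flow, the generated JSON could look something like the dict below (a hypothetical schema, not the Space's actual format):

```python
# Hypothetical example of an LLM-generated quest for a given markdown map;
# all field names are invented for illustration.
quest = {
    "objective": "Retrieve the ancient key and reach the exit",
    "objects": [
        {"name": "ancient_key", "position": {"x": 3, "y": 7}},
        {"name": "locked_door", "position": {"x": 12, "y": 2}},
    ],
}
```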

Come test it on the space:
Jofthomas/Everchanging-Quest
reacted to joylarkin's post with 🚀 3 months ago
Introducing Fineweb-Edu-Fortified: An enhanced Fineweb-Edu dataset. 📚

This dataset is tailored for NLP tasks and helps streamline model training by offering a more refined, deduplicated dataset. Perfect for startups and researchers looking for high-quality educational content to train, evaluate, or fine-tune AI models. The dataset is based on the Fineweb-Edu subset of the large Fineweb dataset and includes:

- Exact-match deduplication across all crawls
- Embeddings for each row using the TaylorAI/bge-micro model
- Count column indicating duplication frequency
- Includes data from 95 Common Crawl crawls (2013-2024)
- Rows have been reduced from 1.279B to 0.324B after deduplication
- It comprises ~375B tokens (down from 1,320B in Fineweb-Edu)

Access the entire Fineweb-Edu-Fortified dataset on Hugging Face → airtrain-ai/fineweb-edu-fortified
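
For example, a streaming load with 🤗 Datasets could look like this (a minimal sketch; the config name is an assumption, so check the dataset card for the available crawl configurations and column names):

```python
from datasets import load_dataset

# Stream a single crawl config instead of downloading the full ~375B-token corpus.
# "CC-MAIN-2024-10" is an assumed config name; see the dataset card for the real list.
ds = load_dataset(
    "airtrain-ai/fineweb-edu-fortified",
    name="CC-MAIN-2024-10",
    split="train",
    streaming=True,
)

for row in ds.take(1):
    print(row["text"][:200])   # document text
    print(row.get("count"))    # duplication count column (name assumed)
```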

Try a semantic search demo via this Hugging Face Space → airtrain-ai/fineweb-edu-fortified-search-demo

Many thanks to the amazing @josh-sematic for his work on this project, the Fineweb/Fineweb-Edu team at Hugging Face for producing the original datasets and for their support during our work on Fineweb-Edu-Fortified, and also thanks to  @underspirit for pointing out the reduction in dataset size that could be achieved via deduplication. 🤗

reacted to joylarkin's post with 👍 3 months ago
reacted to joylarkin's post with 🔥 3 months ago
reacted to qq8933's post with 🔥 3 months ago
reacted to qq8933's post with 🚀 3 months ago
reacted to qq8933's post with 🤗 3 months ago
posted an update 3 months ago
🚀 Introducing The Open Language Models List

This is a work-in-progress list of open language models with permissive licenses such as MIT, Apache 2.0, or other similar licenses.

The list is not limited to autoregressive models, or even to transformers: it also includes many SSMs and SSM-Transformer hybrids.

🤗 Contributions, corrections, and feedback are very welcome!

The Open Language Models List: https://github.com/mmhamdy/open-language-models
reacted to not-lain's post with 🔥 3 months ago
I am now a huggingface fellow 🥳
reacted to not-lain's post with 🤗 3 months ago
reacted to severo's post with ❤️ 3 months ago
reacted to severo's post with 🚀 3 months ago
reacted to WizardLM's post with 🚀 4 months ago
🔥 🔥🔥
Excited to announce WizardLM's new paper: Auto Evol-Instruct!

🐦 Twitter: https://x.com/WizardLM_AI/status/1812857977122202087

📃 Paper: https://arxiv.org/pdf/2406.00770

🤖 1. Fully AI-Powered Pipeline

Auto Evol-Instruct automates the iterative process of optimizing an initial Evol-Instruct V1 method into an optimal one. The pipeline consists of two critical stages: Evol Trajectory Analysis, where the optimizer LLM analyzes the issues and failures exposed in the instruction evolution performed by the evol LLM, and Evolving Method Optimization, where the optimizer LLM addresses these issues to progressively develop an effective evolving method. The optimal evolving method is then used to convert the entire instruction dataset into more diverse and complex forms, facilitating improved instruction tuning.
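
In rough pseudocode, the loop described above might look like this (my own sketch, not the authors' code; the callables are placeholders for the evol-LLM and optimizer-LLM prompts):

```python
import random

def auto_evol_instruct(seed_method, instructions, evolve, analyze, optimize, n_iters=5):
    """Rough sketch of the Auto Evol-Instruct loop (placeholder callables):
      - evolve(method, instruction) -> evolved instruction + trace, via the evol LLM
      - analyze(trajectories)       -> issues/failures found by the optimizer LLM
      - optimize(method, feedback)  -> improved evolving method from the optimizer LLM
    """
    method = seed_method  # start from the initial Evol-Instruct V1 evolving method
    for _ in range(n_iters):
        # Stage 1: Evol Trajectory Analysis on a small batch of instructions.
        batch = random.sample(instructions, k=min(8, len(instructions)))
        trajectories = [evolve(method, inst) for inst in batch]
        feedback = analyze(trajectories)
        # Stage 2: Evolving Method Optimization guided by the exposed failures.
        method = optimize(method, feedback)
    # The optimized method is then applied to the whole dataset.
    return [evolve(method, inst) for inst in instructions]
```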

📈 2. Scaling Evol-Instruct with Arena Learning

With Auto Evol-Instruct, the evolutionary synthesis data of WizardLM-2 has scaled up from WizardLM-1 to dozens of domains, covering tasks in all aspects of large language models. This allows Arena Learning to train and learn from an almost infinite pool of high-difficulty instruction data, fully unlocking all the potential of Arena Learning.
reacted to WizardLM's post with 👍 4 months ago
reacted to yuchenlin's post with 🤗 5 months ago
Introducing **BaseChat**!
https://huggingface.co/spaces/allenai/BaseChat_URIAL

This is a demo for our URIAL paper, which enables base LLMs to chat through in-context alignment. You can talk directly to base, untuned LLMs to find out what knowledge and skills they have already learned from pre-training alone, without SFT, xPO, or RLHF. You can also use it to explore the pre-training data of base LLMs by chatting! I found a very interesting case: the base version of Llama-3-8B often thinks it was built by OpenAI, lol.
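
The core idea behind URIAL is simple to sketch: prompt a base (untuned) model with a fixed instruction plus a few stylistic in-context example turns, then let it complete the new user turn. A rough illustration with 🤗 Transformers (the model name and prompt are placeholders, not the demo's actual URIAL prompt):

```python
from transformers import pipeline

# Any base (non-instruct) checkpoint works in principle; this one is a placeholder.
generator = pipeline("text-generation", model="meta-llama/Meta-Llama-3-8B")

# A URIAL-style prompt: fixed instructions + a few example turns, no fine-tuning.
urial_prompt = """Below are conversations between a user and a helpful assistant.

# User:
What is the capital of France?
# Assistant:
The capital of France is Paris.

# User:
How do I boil an egg?
# Assistant:
Place the egg in boiling water for 8-10 minutes, then cool it in cold water.

# User:
{question}
# Assistant:
"""

out = generator(urial_prompt.format(question="Who built you?"), max_new_tokens=100)
print(out[0]["generated_text"])
```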