Crispin Almodovar (calmodovar)
AI & ML interests

NLP, log anomaly detection, cyber intelligence

calmodovar's activity

Upvoted an article 22 days ago:

Visually Multilingual: Introducing mcdse-2b
By marco
Reacted to singhsidhukuldeep's post with 👀 about 1 month ago:
While Google's Transformer might have introduced "Attention is all you need," Microsoft and Tsinghua University are here with the DIFF Transformer, stating, "Sparse-Attention is all you need."

The DIFF Transformer outperforms traditional Transformers in scaling properties, requiring only about 65% of the model size or training tokens to achieve comparable performance.

The secret sauce? A differential attention mechanism that amplifies focus on relevant context while canceling out noise, leading to sparser and more effective attention patterns.

How?
- It uses two separate softmax attention maps and subtracts them.
- It employs a learnable scalar λ for balancing the attention maps.
- It implements GroupNorm for each attention head independently.
- It is compatible with FlashAttention for efficient computation.
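The core subtraction step can be sketched in NumPy. This is a toy, single-head illustration under stated assumptions: the learnable λ is replaced by a fixed constant, and the per-head GroupNorm and FlashAttention integration described in the paper are omitted. All names here (`diff_attention`, `lam`, etc.) are illustrative, not from the paper's code.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def diff_attention(q1, k1, q2, k2, v, lam):
    """Differential attention (toy sketch): subtract two softmax
    attention maps, with lambda balancing the second map."""
    d = q1.shape[-1]
    a1 = softmax(q1 @ k1.T / np.sqrt(d))  # first attention map
    a2 = softmax(q2 @ k2.T / np.sqrt(d))  # second attention map
    attn = a1 - lam * a2                  # common-mode noise cancels out
    return attn @ v

rng = np.random.default_rng(0)
n, d = 4, 8
q1, k1, q2, k2 = (rng.standard_normal((n, d)) for _ in range(4))
v = rng.standard_normal((n, d))
out = diff_attention(q1, k1, q2, k2, v, lam=0.5)
```

Because each softmax map's rows sum to 1, the differential map's rows sum to 1 − λ, and attention mass shared by both maps (the "noise") is cancelled, which is what drives the sparser patterns described above.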

What do you get?
- Superior long-context modeling (up to 64K tokens).
- Enhanced key information retrieval.
- Reduced hallucination in question-answering and summarization tasks.
- More robust in-context learning, less affected by prompt order.
- Mitigation of activation outliers, opening doors for efficient quantization.

Extensive experiments show DIFF Transformer's advantages across various tasks and model sizes, from 830M to 13.1B parameters.

This innovative architecture could be a game-changer for the next generation of LLMs. What are your thoughts on DIFF Transformer's potential impact?
Upvoted an article 4 months ago.

Upvoted an article 6 months ago:

Multimodal Augmentation for Documents: Recovering “Comprehension” in “Reading and Comprehension” task