[FEEDBACK] Daily Papers

#32
by kramp HF staff - opened
Hugging Face org
•
edited Jul 25

Note that this is not a post about adding new papers, it's about feedback on the Daily Papers community update feature.

How to submit a paper to the Daily Papers, like @akhaliq (AK)?

  • Submitting is available to paper authors
  • Only recent papers (less than 7d) can be featured on the Daily

Then drop the arxiv id in the form at https://huggingface.co/papers/submit

  • Add media (images, videos) to the paper when relevant
  • You can start a discussion to engage with the community

Please check out the documentation
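As a rough sketch of the 7-day eligibility rule above (the function name and dates are illustrative, not an official API):

```python
from datetime import date, timedelta

def is_recent(submission_date: date, today: date, window_days: int = 7) -> bool:
    """Return True if the paper was submitted within the last `window_days`
    days, the stated eligibility window for the Daily Papers."""
    return (today - submission_date) <= timedelta(days=window_days)

# A paper submitted 5 days ago is still eligible; one from 15 days ago is not.
print(is_recent(date(2024, 11, 20), date(2024, 11, 25)))  # True
print(is_recent(date(2024, 11, 10), date(2024, 11, 25)))  # False
```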

We are excited to share our recent work on MLLM architecture design titled "Ovis: Structural Embedding Alignment for Multimodal Large Language Model".

Paper: https://arxiv.org/abs/2405.20797
Github: https://github.com/AIDC-AI/Ovis
Model: https://huggingface.co/AIDC-AI/Ovis-Clip-Llama3-8B
Data: https://huggingface.co/datasets/AIDC-AI/Ovis-dataset

Hugging Face org

@Yiwen-ntu for now we support only videos as paper covers in the Daily.


We are excited to share our work titled "Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models": https://arxiv.org/abs/2406.12644

🔥🔥🔥 We are excited to share a new efficient small language model architecture with parallel Mamba and Attention fusion: Hymba.

We study the tradeoff between Mamba and Attention, exploring how they can be combined, how the attention sink and forced-to-attend phenomena can be mitigated, and how the KV cache can be shared across layers.
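As a toy illustration of fusing a parallel attention branch with an SSM-style branch, here is a minimal numpy sketch; the projections, the simple decay recurrence, and the mean fusion are all stand-ins for illustration, not the paper's actual Mamba or fusion implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 8                                 # sequence length, model width
X = rng.normal(size=(T, d))

# Attention branch: single-head causal softmax attention
Wq, Wk, Wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)
scores[np.triu_indices(T, k=1)] = -np.inf   # causal mask: no attending ahead
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn_out = (weights / weights.sum(axis=-1, keepdims=True)) @ V

# SSM-style branch: a crude gated linear recurrence standing in for Mamba
decay = 0.9
h = np.zeros(d)
ssm_out = np.empty_like(X)
for t in range(T):
    h = decay * h + (1.0 - decay) * X[t]
    ssm_out[t] = h

# Fuse the two parallel branches into one hybrid-head output
fused = 0.5 * (attn_out + ssm_out)
print(fused.shape)                          # (6, 8)
```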

The team delivered an end-to-end solution: a novel architecture, curated data selection, a five-stage training setup, and both Base and Instruct models, released under an open license.

A standout result is that the Hymba-1.5B Base model outperforms LLaMA 3.2-3B despite being trained on 7× fewer tokens, while also achieving a 12× cache reduction.

😊 Model: https://huggingface.co/collections/nvidia/hymba-673c35516c12c4b98b5e845f
📖 Paper: https://www.arxiv.org/abs/2411.13676

Inspired by "big-little core" chip design, we introduce "one-big-many-small" grouping for efficient multi-model deployment, cutting storage costs from NM to (1+rN)M!
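A quick back-of-the-envelope check of the storage formula above, assuming N models of size M each, with deltas compressed to a fraction r of a full model (the concrete numbers below are illustrative, not from the paper):

```python
def storage_cost_gb(n_models: int, model_size_gb: float, delta_ratio: float):
    """Compare naive multi-model storage (N*M) against the
    one-big-many-small grouping ((1 + r*N) * M)."""
    naive = n_models * model_size_gb
    grouped = (1 + delta_ratio * n_models) * model_size_gb
    return naive, grouped

# Illustrative numbers: 8 models of 14 GB, deltas at 5% of full size
naive, grouped = storage_cost_gb(8, 14.0, 0.05)
print(naive, grouped)  # 112 GB naive vs. about 19.6 GB grouped
```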

Paper: https://arxiv.org/abs/2406.08903
Github: https://github.com/thunlp/Delta-CoMe

Hi AK and HF team,

I am happy to introduce DiffusionDrive, a real-time end-to-end autonomous driving model that is much faster (a 10× reduction in diffusion denoising steps), more accurate (3.5 points higher PDMS on NAVSIM), and more diverse (a 64% higher mode diversity score) than the vanilla diffusion policy. Without bells and whistles, DiffusionDrive achieves a record-breaking 88.1 PDMS on the NAVSIM benchmark with the same ResNet-34 backbone by learning directly from human demonstrations, while running at a real-time speed of 45 FPS. Please check out our work:

(Paper) 📑: https://arxiv.org/abs/2411.15139
(Code) 🚀: https://github.com/hustvl/DiffusionDrive

We demonstrate robust and safe driving in real-world applications.

@akhaliq @kramp
Dear AK and HF team,

🚀 We are pleased to share our latest research paper, "Beyond Examples: High-level Automated Reasoning Paradigm in In-Context Learning via MCTS," for your consideration, as we believe it may be of significant interest for HF Daily Papers. This work introduces HiAR-ICL, a novel paradigm to enhance the complex reasoning capabilities of large language models.

🌟 Unlike traditional in-context learning, HiAR-ICL shifts the focus from example-based analogical learning to abstract thinking patterns. It employs Monte Carlo Tree Search to explore reasoning paths and creates "thought cards" to guide inference. By dynamically matching test problems with appropriate thought cards through a proposed cognitive complexity framework, HiAR-ICL achieves a state-of-the-art accuracy of 79.6% with a 7B model on the challenging MATH benchmark, surpassing both GPT-4o and Claude 3.5.
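The Monte Carlo Tree Search exploration described above can be sketched generically. The toy below runs UCT on a trivial single-player task (maximize the number of 1-bits in a 4-bit string); the node structure, reward, and constants are illustrative stand-ins, not the HiAR-ICL implementation:

```python
import math
import random

DEPTH = 4  # toy task: choose 4 bits; reward = number of 1-bits

class Node:
    def __init__(self, state):
        self.state = state      # tuple of bits chosen so far
        self.children = {}      # action (0 or 1) -> Node
        self.visits = 0
        self.value = 0.0

def uct(parent, child, c=1.4):
    """Upper Confidence bound for Trees: exploit mean value, explore rarely-visited children."""
    return child.value / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)

def rollout(state, rng):
    """Complete the partial state with random bits and score it."""
    bits = list(state) + [rng.randint(0, 1) for _ in range(DEPTH - len(state))]
    return sum(bits)

def search(iterations=500, seed=0):
    rng = random.Random(seed)
    root = Node(())
    for _ in range(iterations):
        node, path = root, [root]
        # Selection: descend while the node is fully expanded
        while len(node.state) < DEPTH and len(node.children) == 2:
            node = max(node.children.values(), key=lambda ch: uct(node, ch))
            path.append(node)
        # Expansion: add one untried child
        if len(node.state) < DEPTH:
            action = rng.choice([a for a in (0, 1) if a not in node.children])
            node.children[action] = Node(node.state + (action,))
            node = node.children[action]
            path.append(node)
        # Rollout + backpropagation
        reward = rollout(node.state, rng)
        for n in path:
            n.visits += 1
            n.value += reward
    # Extract the most-visited path as the final answer
    best, node = [], root
    while node.children:
        action, node = max(node.children.items(), key=lambda kv: kv[1].visits)
        best.append(action)
    return best

print(search())  # visit counts concentrate on 1-bits
```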

📑 Paper: https://arxiv.org/pdf/2411.18478
🌐 Project Page: https://jinyangwu.github.io/hiar-icl/

We would greatly appreciate your consideration of our paper for inclusion.

Best regards,
Jinyang Wu, Mingkuan Feng, Shuai Zhang, Feihu Che, Zengqi Wen, Jianhua Tao



Hi @kramp and @akhaliq please could you help me verify my authorship claim for this paper? https://huggingface.co/papers/2411.15640
Today marks the 6th day, and I need the claim verified so the paper can still be featured in the Daily Papers.

Hi @kramp and @akhaliq ,

I hope you're doing well! I would like to kindly request your assistance in verifying my authorship claim for this paper: https://huggingface.co/papers/2411.18478. Today marks the 6th day, and I would appreciate it if you could help expedite the verification process so that the paper can be featured on the daily papers.

Thank you so much for your help!

Best regards,
Jinyang Wu
