date (timestamp[ns]) | arxiv_id (string, 10–10 chars) | title (string, 8–177 chars) | authors (sequence, 1–942 items) | github (string, 0–115 chars) | abstract (string, 165–1.92k chars)
---|---|---|---|---|---
2024-10-23T00:00:00 | 2410.16266 | 3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors | [
"Xi Liu",
"Chaoyi Zhou",
"Siyu Huang"
] | Novel-view synthesis aims to generate novel views of a scene from multiple input images or videos, and recent advancements like 3D Gaussian splatting (3DGS) have achieved notable success in producing photorealistic renderings with efficient pipelines. However, generating high-quality novel views under challenging settings, such as sparse input views, remains difficult due to insufficient information in under-sampled areas, often resulting in noticeable artifacts. This paper presents 3DGS-Enhancer, a novel pipeline for enhancing the representation quality of 3DGS. We leverage 2D video diffusion priors to address the challenging 3D view consistency problem, reformulating it as achieving temporal consistency within a video generation process. 3DGS-Enhancer restores view-consistent latent features of rendered novel views and integrates them with the input views through a spatial-temporal decoder. The enhanced views are then used to fine-tune the initial 3DGS model, significantly improving its rendering performance. Extensive experiments on large-scale datasets of unbounded scenes demonstrate that 3DGS-Enhancer yields superior reconstruction performance and high-fidelity rendering results compared to state-of-the-art methods. The project webpage is https://xiliu8006.github.io/3DGS-Enhancer-project . |
|
2024-10-23T00:00:00 | 2410.17241 | Frontiers in Intelligent Colonoscopy | [
"Ge-Peng Ji",
"Jingyi Liu",
"Peng Xu",
"Nick Barnes",
"Fahad Shahbaz Khan",
"Salman Khan",
"Deng-Ping Fan"
] | https://github.com/ai4colonoscopy/IntelliScope | Colonoscopy is currently one of the most sensitive screening methods for colorectal cancer. This study investigates the frontiers of intelligent colonoscopy techniques and their prospective implications for multimodal medical applications. With this goal, we begin by assessing the current data-centric and model-centric landscapes through four tasks for colonoscopic scene perception, including classification, detection, segmentation, and vision-language understanding. This assessment enables us to identify domain-specific challenges and reveals that multimodal research in colonoscopy remains open for further exploration. To embrace the coming multimodal era, we establish three foundational initiatives: a large-scale multimodal instruction tuning dataset ColonINST, a colonoscopy-designed multimodal language model ColonGPT, and a multimodal benchmark. To facilitate ongoing monitoring of this rapidly evolving field, we provide a public website for the latest updates: https://github.com/ai4colonoscopy/IntelliScope. |
2024-10-23T00:00:00 | 2410.16392 | LLM-based Optimization of Compound AI Systems: A Survey | [
"Matthieu Lin",
"Jenny Sheng",
"Andrew Zhao",
"Shenzhi Wang",
"Yang Yue",
"Yiran Wu",
"Huan Liu",
"Jun Liu",
"Gao Huang",
"Yong-Jin Liu"
] | https://github.com/linyuhongg/LLM-based-Optimization-of-Compound-AI-Systems | In a compound AI system, components such as an LLM call, a retriever, a code interpreter, or tools are interconnected. The system's behavior is primarily driven by parameters such as instructions or tool definitions. Recent advancements enable end-to-end optimization of these parameters using an LLM. Notably, leveraging an LLM as an optimizer is particularly efficient because it avoids gradient computation and can generate complex code and instructions. This paper presents a survey of the principles and emerging trends in LLM-based optimization of compound AI systems. It covers archetypes of compound AI systems, approaches to LLM-based end-to-end optimization, and insights into future directions and broader impacts. Importantly, this survey uses concepts from program analysis to provide a unified view of how an LLM optimizer is prompted to optimize a compound AI system. An exhaustive list of papers is provided at https://github.com/linyuhongg/LLM-based-Optimization-of-Compound-AI-Systems. |
2024-10-24T00:00:00 | 2410.18072 | WorldSimBench: Towards Video Generation Models as World Simulators | [
"Yiran Qin",
"Zhelun Shi",
"Jiwen Yu",
"Xijun Wang",
"Enshen Zhou",
"Lijun Li",
"Zhenfei Yin",
"Xihui Liu",
"Lu Sheng",
"Jing Shao",
"Lei Bai",
"Wanli Ouyang",
"Ruimao Zhang"
] | Recent advancements in predictive models have demonstrated exceptional capabilities in predicting the future state of objects and scenes. However, the lack of categorization based on inherent characteristics continues to hinder the progress of predictive model development. Additionally, existing benchmarks are unable to effectively evaluate higher-capability, highly embodied predictive models from an embodied perspective. In this work, we classify the functionalities of predictive models into a hierarchy and take the first step in evaluating World Simulators by proposing a dual evaluation framework called WorldSimBench. WorldSimBench includes Explicit Perceptual Evaluation and Implicit Manipulative Evaluation, encompassing human preference assessments from the visual perspective and action-level evaluations in embodied tasks, covering three representative embodied scenarios: Open-Ended Embodied Environment, Autonomous Driving, and Robot Manipulation. In the Explicit Perceptual Evaluation, we introduce the HF-Embodied Dataset, a video assessment dataset based on fine-grained human feedback, which we use to train a Human Preference Evaluator that aligns with human perception and explicitly assesses the visual fidelity of World Simulators. In the Implicit Manipulative Evaluation, we assess the video-action consistency of World Simulators by evaluating whether the generated situation-aware video can be accurately translated into the correct control signals in dynamic environments. Our comprehensive evaluation offers key insights that can drive further innovation in video generation models, positioning World Simulators as a pivotal advancement toward embodied artificial intelligence. |
|
2024-10-24T00:00:00 | 2410.17637 | MIA-DPO: Multi-Image Augmented Direct Preference Optimization For Large Vision-Language Models | [
"Ziyu Liu",
"Yuhang Zang",
"Xiaoyi Dong",
"Pan Zhang",
"Yuhang Cao",
"Haodong Duan",
"Conghui He",
"Yuanjun Xiong",
"Dahua Lin",
"Jiaqi Wang"
] | Visual preference alignment involves training Large Vision-Language Models (LVLMs) to predict human preferences between visual inputs. This is typically achieved by using labeled datasets of chosen/rejected pairs and employing optimization algorithms like direct preference optimization (DPO). Existing visual alignment methods, primarily designed for single-image scenarios, struggle to effectively handle the complexity of multi-image tasks due to the scarcity of diverse training data and the high cost of annotating chosen/rejected pairs. We present Multi-Image Augmented Direct Preference Optimization (MIA-DPO), a visual preference alignment approach that effectively handles multi-image inputs. MIA-DPO mitigates the scarcity of diverse multi-image training data by extending single-image data with unrelated images arranged in grid collages or pic-in-pic formats, significantly reducing the costs associated with multi-image data annotations. Our observation reveals that attention values of LVLMs vary considerably across different images. We use attention values to identify and filter out rejected responses the model may have mistakenly focused on. Our attention-aware selection constructs the chosen/rejected pairs without relying on (i) human annotation, (ii) extra data, or (iii) external models or APIs. MIA-DPO is compatible with various architectures and outperforms existing methods on five multi-image benchmarks, achieving an average performance boost of 3.0% on LLaVA-v1.5 and 4.3% on the recent InternLM-XC2.5. Moreover, MIA-DPO has a minimal effect on the model's ability to understand single images. |
|
2024-10-24T00:00:00 | 2410.17891 | Scaling Diffusion Language Models via Adaptation from Autoregressive Models | [
"Shansan Gong",
"Shivam Agarwal",
"Yizhe Zhang",
"Jiacheng Ye",
"Lin Zheng",
"Mukai Li",
"Chenxin An",
"Peilin Zhao",
"Wei Bi",
"Jiawei Han",
"Hao Peng",
"Lingpeng Kong"
] | https://github.com/HKUNLP/DiffuLLaMA | Diffusion Language Models (DLMs) have emerged as a promising new paradigm for text generative modeling, potentially addressing limitations of autoregressive (AR) models. However, current DLMs have been studied at a smaller scale compared to their AR counterparts and lack fair comparison on language modeling benchmarks. Additionally, training diffusion models from scratch at scale remains challenging. Given the prevalence of open-source AR language models, we propose adapting these models to build text diffusion models. We demonstrate connections between AR and diffusion modeling objectives and introduce a simple continual pre-training approach for training diffusion models. Through systematic evaluation on language modeling, reasoning, and commonsense benchmarks, we show that we can convert AR models ranging from 127M to 7B parameters (GPT2 and LLaMA) into diffusion models DiffuGPT and DiffuLLaMA, using less than 200B tokens for training. Our experimental results reveal that these models outperform earlier DLMs and are competitive with their AR counterparts. We release a suite of DLMs (with 127M, 355M, and 7B parameters) capable of generating fluent text, performing in-context learning, filling in the middle without prompt re-ordering, and following instructions: https://github.com/HKUNLP/DiffuLLaMA. |
2024-10-24T00:00:00 | 2410.17883 | Lightweight Neural App Control | [
"Filippos Christianos",
"Georgios Papoudakis",
"Thomas Coste",
"Jianye Hao",
"Jun Wang",
"Kun Shao"
] | This paper introduces a novel mobile phone control architecture, termed "app agents", for efficient interactions and controls across various Android apps. The proposed Lightweight Multi-modal App Control (LiMAC) takes as input a textual goal and a sequence of past mobile observations, such as screenshots and corresponding UI trees, to generate precise actions. To address the computational constraints inherent to smartphones, within LiMAC, we introduce a small Action Transformer (AcT) integrated with a fine-tuned vision-language model (VLM) for real-time decision-making and task execution. We evaluate LiMAC on two open-source mobile control datasets, demonstrating the superior performance of our small-form-factor approach against fine-tuned versions of open-source VLMs, such as Florence2 and Qwen2-VL. It also significantly outperforms prompt engineering baselines utilising closed-source foundation models like GPT-4o. More specifically, LiMAC increases the overall action accuracy by up to 19% compared to fine-tuned VLMs, and up to 42% compared to prompt-engineering baselines. |
|
2024-10-24T00:00:00 | 2410.18071 | TP-Eval: Tap Multimodal LLMs' Potential in Evaluation by Customizing Prompts | [
"Yuxuan Xie",
"Tianhua Li",
"Wenqi Shao",
"Kaipeng Zhang"
] | Recently, multimodal large language models (MLLMs) have received much attention for their impressive capabilities. The evaluation of MLLMs is becoming critical to analyzing attributes of MLLMs and providing valuable insights. However, current benchmarks overlook the problem of prompt sensitivity - minor prompt variations may lead to significant performance fluctuations. Thus, inappropriate prompts may obscure the models' capabilities, underestimating the models' performance. Moreover, different models have different preferences for different prompts, and thus, using the same prompt for all models will cause evaluation bias. This paper analyzes this deficiency in existing benchmarks and further introduces a new evaluation framework named TP-Eval, which introduces a prompt customization method to reduce evaluation biases and tap models' potential. TP-Eval will rewrite the original prompts to different customized prompts for different models. In particular, we propose some well-designed modules for prompt customization tailored to the scenario of MLLM evaluation. Extensive experiments demonstrate the effectiveness of our approach to uncovering models' capabilities, and TP-Eval should benefit the community in developing more comprehensive and convincing MLLM evaluation benchmarks. |
|
2024-10-24T00:00:00 | 2410.18013 | Scalable Ranked Preference Optimization for Text-to-Image Generation | [
"Shyamgopal Karthik",
"Huseyin Coskun",
"Zeynep Akata",
"Sergey Tulyakov",
"Jian Ren",
"Anil Kag"
] | Direct Preference Optimization (DPO) has emerged as a powerful approach to align text-to-image (T2I) models with human feedback. Unfortunately, successful application of DPO to T2I models requires a huge amount of resources to collect and label large-scale datasets, e.g., millions of generated paired images annotated with human preferences. In addition, these human preference datasets can get outdated quickly as the rapid improvements of T2I models lead to higher quality images. In this work, we investigate a scalable approach for collecting large-scale and fully synthetic datasets for DPO training. Specifically, the preferences for paired images are generated using a pre-trained reward function, eliminating the need for involving humans in the annotation process, greatly improving the dataset collection efficiency. Moreover, we demonstrate that such datasets allow averaging predictions across multiple models and collecting ranked preferences as opposed to pairwise preferences. Furthermore, we introduce RankDPO to enhance DPO-based methods using the ranking feedback. Applying RankDPO on SDXL and SD3-Medium models with our synthetically generated preference dataset "Syn-Pic" improves both prompt-following (on benchmarks like T2I-Compbench, GenEval, and DPG-Bench) and visual quality (through user studies). This pipeline presents a practical and scalable solution to develop better preference datasets to enhance the performance of text-to-image models. |
|
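The RankDPO idea above, reduced to a minimal sketch: a pretrained reward function scores K candidates per prompt, every higher-scored/lower-scored pair is treated as a chosen/rejected pair, and a standard DPO logistic loss is averaged over those pairs. The all-pairs expansion, the `beta` value, and the toy inputs are assumptions for illustration, not the paper's exact objective.

```python
import numpy as np

def ranked_dpo_loss(policy_logps, ref_logps, reward_scores, beta=0.5):
    """DPO-style loss from a ranked list of K candidates for one prompt.

    policy_logps, ref_logps: shape (K,), log-probabilities assigned by the
    trained and reference models to each candidate.
    reward_scores: shape (K,), scores from a pretrained reward function;
    every pair (i, j) with score_i > score_j is treated as chosen/rejected
    (an assumption made for this sketch).
    """
    policy_logps = np.asarray(policy_logps, dtype=float)
    ref_logps = np.asarray(ref_logps, dtype=float)
    reward_scores = np.asarray(reward_scores, dtype=float)

    losses = []
    for i in range(len(reward_scores)):
        for j in range(len(reward_scores)):
            if reward_scores[i] <= reward_scores[j]:
                continue  # i must be strictly preferred over j
            # Implicit reward margin under the DPO parameterization.
            margin = beta * ((policy_logps[i] - ref_logps[i])
                             - (policy_logps[j] - ref_logps[j]))
            losses.append(np.log1p(np.exp(-margin)))  # -log(sigmoid(margin))
    return float(np.mean(losses)) if losses else 0.0

# Toy usage: 4 candidates ranked by a synthetic reward function.
print(ranked_dpo_loss(policy_logps=[-4.0, -5.0, -6.0, -7.0],
                      ref_logps=[-5.0, -5.0, -5.0, -5.0],
                      reward_scores=[0.9, 0.7, 0.4, 0.1]))
```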
2024-10-24T00:00:00 | 2410.13924 | ARKit LabelMaker: A New Scale for Indoor 3D Scene Understanding | [
"Guangda Ji",
"Silvan Weder",
"Francis Engelmann",
"Marc Pollefeys",
"Hermann Blum"
] | The performance of neural networks scales with both their size and the amount of data they have been trained on. This is shown in both language and image generation. However, this requires scaling-friendly network architectures as well as large-scale datasets. Even though scaling-friendly architectures like transformers have emerged for 3D vision tasks, the GPT-moment of 3D vision remains distant due to the lack of training data. In this paper, we introduce ARKit LabelMaker, the first large-scale, real-world 3D dataset with dense semantic annotations. Specifically, we complement the ARKitScenes dataset with dense semantic annotations that are automatically generated at scale. To this end, we extend LabelMaker, a recent automatic annotation pipeline, to serve the needs of large-scale pre-training. This involves extending the pipeline with cutting-edge segmentation models as well as making it robust to the challenges of large-scale processing. Further, we push forward the state-of-the-art performance on the ScanNet and ScanNet200 datasets with prevalent 3D semantic segmentation models, demonstrating the efficacy of our generated dataset. |
|
2024-10-24T00:00:00 | 2410.15522 | M-RewardBench: Evaluating Reward Models in Multilingual Settings | [
"Srishti Gureja",
"Lester James V. Miranda",
"Shayekh Bin Islam",
"Rishabh Maheshwary",
"Drishti Sharma",
"Gusti Winata",
"Nathan Lambert",
"Sebastian Ruder",
"Sara Hooker",
"Marzieh Fadaee"
] | Reward models (RMs) have driven the state-of-the-art performance of LLMs today by enabling the integration of human feedback into the language modeling process. However, RMs are primarily trained and evaluated in English, and their capabilities in multilingual settings remain largely understudied. In this work, we conduct a systematic evaluation of several reward models in multilingual settings. We first construct the first-of-its-kind multilingual RM evaluation benchmark, M-RewardBench, consisting of 2.87k preference instances for 23 typologically diverse languages that tests the chat, safety, reasoning, and translation capabilities of RMs. We then rigorously evaluate a wide range of reward models on M-RewardBench, offering fresh insights into their performance across diverse languages. We identify a significant gap in RMs' performances between English and non-English languages and show that RM preferences can change substantially from one language to another. We also present several findings on how different multilingual aspects impact RM performance. Specifically, we show that the performance of RMs is improved with improved translation quality. Similarly, we demonstrate that the models exhibit better performance for high-resource languages. We release the M-RewardBench dataset and the codebase from this study to facilitate a better understanding of RM evaluation in multilingual settings. |
|
2024-10-24T00:00:00 | 2410.18084 | DynamicCity: Large-Scale LiDAR Generation from Dynamic Scenes | [
"Hengwei Bian",
"Lingdong Kong",
"Haozhe Xie",
"Liang Pan",
"Yu Qiao",
"Ziwei Liu"
] | LiDAR scene generation has been developing rapidly recently. However, existing methods primarily focus on generating static and single-frame scenes, overlooking the inherently dynamic nature of real-world driving environments. In this work, we introduce DynamicCity, a novel 4D LiDAR generation framework capable of generating large-scale, high-quality LiDAR scenes that capture the temporal evolution of dynamic environments. DynamicCity mainly consists of two key models. 1) A VAE model for learning HexPlane as the compact 4D representation. Instead of using naive averaging operations, DynamicCity employs a novel Projection Module to effectively compress 4D LiDAR features into six 2D feature maps for HexPlane construction, which significantly enhances HexPlane fitting quality (up to 12.56 mIoU gain). Furthermore, we utilize an Expansion & Squeeze Strategy to reconstruct 3D feature volumes in parallel, which improves both network training efficiency and reconstruction accuracy compared to naively querying each 3D point (up to 7.05 mIoU gain, 2.06x training speedup, and 70.84% memory reduction). 2) A DiT-based diffusion model for HexPlane generation. To make HexPlane feasible for DiT generation, a Padded Rollout Operation is proposed to reorganize all six feature planes of the HexPlane as a squared 2D feature map. In particular, various conditions could be introduced in the diffusion or sampling process, supporting versatile 4D generation applications, such as trajectory- and command-driven generation, inpainting, and layout-conditioned generation. Extensive experiments on the CarlaSC and Waymo datasets demonstrate that DynamicCity significantly outperforms existing state-of-the-art 4D LiDAR generation methods across multiple metrics. The code will be released to facilitate future research. |
|
2024-10-24T00:00:00 | 2410.13458 | MedINST: Meta Dataset of Biomedical Instructions | [
"Wenhan Han",
"Meng Fang",
"Zihan Zhang",
"Yu Yin",
"Zirui Song",
"Ling Chen",
"Mykola Pechenizkiy",
"Qingyu Chen"
] | The integration of large language model (LLM) techniques in the field of medical analysis has brought about significant advancements, yet the scarcity of large, diverse, and well-annotated datasets remains a major challenge. Medical data and tasks, which vary in format, size, and other parameters, require extensive preprocessing and standardization for effective use in training LLMs. To address these challenges, we introduce MedINST, the Meta Dataset of Biomedical Instructions, a novel multi-domain, multi-task instructional meta-dataset. MedINST comprises 133 biomedical NLP tasks and over 7 million training samples, making it the most comprehensive biomedical instruction dataset to date. Using MedINST as the meta dataset, we curate MedINST32, a challenging benchmark with different task difficulties aiming to evaluate LLMs' generalization ability. We fine-tune several LLMs on MedINST and evaluate on MedINST32, showcasing enhanced cross-task generalization. |
|
2024-10-24T00:00:00 | 2410.17434 | LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding | [
"Xiaoqian Shen",
"Yunyang Xiong",
"Changsheng Zhao",
"Lemeng Wu",
"Jun Chen",
"Chenchen Zhu",
"Zechun Liu",
"Fanyi Xiao",
"Balakrishnan Varadarajan",
"Florian Bordes",
"Zhuang Liu",
"Hu Xu",
"Hyunwoo J. Kim",
"Bilge Soran",
"Raghuraman Krishnamoorthi",
"Mohamed Elhoseiny",
"Vikas Chandra"
] | Multimodal Large Language Models (MLLMs) have shown promising progress in understanding and analyzing video content. However, processing long videos remains a significant challenge constrained by the LLM's context size. To address this limitation, we propose LongVU, a spatiotemporal adaptive compression mechanism that reduces the number of video tokens while preserving visual details of long videos. Our idea is based on leveraging cross-modal query and inter-frame dependencies to adaptively reduce temporal and spatial redundancy in videos. Specifically, we leverage DINOv2 features to remove redundant frames that exhibit high similarity. Then we utilize text-guided cross-modal query for selective frame feature reduction. Further, we perform spatial token reduction across frames based on their temporal dependencies. Our adaptive compression strategy effectively processes a large number of frames with little visual information loss within a given context length. Our LongVU consistently surpasses existing methods across a variety of video understanding benchmarks, especially on hour-long video understanding tasks such as VideoMME and MLVU. Given a light-weight LLM, our LongVU also scales effectively into a smaller size with state-of-the-art video understanding performance. |
|
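A toy sketch of the kind of similarity-based frame pruning the LongVU abstract describes: drop a frame when its pooled feature (e.g., from DINOv2) is nearly identical to the last kept frame. The cosine threshold and greedy keep-last policy are illustrative assumptions rather than LongVU's exact procedure.

```python
import numpy as np

def prune_redundant_frames(frame_features, sim_threshold=0.95):
    """Greedily keep frames whose features differ enough from the last kept one.

    frame_features: array of shape (T, D), e.g. pooled per-frame features.
    Returns the indices of kept frames.  Threshold and greedy policy are
    illustrative choices, not the paper's exact method.
    """
    feats = np.asarray(frame_features, dtype=float)
    feats = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)

    kept = [0]  # always keep the first frame
    for t in range(1, len(feats)):
        cos_sim = float(feats[t] @ feats[kept[-1]])
        if cos_sim < sim_threshold:  # sufficiently novel content
            kept.append(t)
    return kept

# Toy usage: 6 frames where frames 1-2 nearly duplicate frame 0.
rng = np.random.default_rng(0)
base = rng.normal(size=(1, 16))
frames = np.vstack([base, base + 1e-3, base + 1e-3, rng.normal(size=(3, 16))])
print(prune_redundant_frames(frames))  # e.g. [0, 3, 4, 5]
```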
2024-10-24T00:00:00 | 2410.17242 | LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias | [
"Haian Jin",
"Hanwen Jiang",
"Hao Tan",
"Kai Zhang",
"Sai Bi",
"Tianyuan Zhang",
"Fujun Luan",
"Noah Snavely",
"Zexiang Xu"
] | We propose the Large View Synthesis Model (LVSM), a novel transformer-based approach for scalable and generalizable novel view synthesis from sparse-view inputs. We introduce two architectures: (1) an encoder-decoder LVSM, which encodes input image tokens into a fixed number of 1D latent tokens, functioning as a fully learned scene representation, and decodes novel-view images from them; and (2) a decoder-only LVSM, which directly maps input images to novel-view outputs, completely eliminating intermediate scene representations. Both models bypass the 3D inductive biases used in previous methods -- from 3D representations (e.g., NeRF, 3DGS) to network designs (e.g., epipolar projections, plane sweeps) -- addressing novel view synthesis with a fully data-driven approach. While the encoder-decoder model offers faster inference due to its independent latent representation, the decoder-only LVSM achieves superior quality, scalability, and zero-shot generalization, outperforming previous state-of-the-art methods by 1.5 to 3.5 dB PSNR. Comprehensive evaluations across multiple datasets demonstrate that both LVSM variants achieve state-of-the-art novel view synthesis quality. Notably, our models surpass all previous methods even with reduced computational resources (1-2 GPUs). Please see our website for more details: https://haian-jin.github.io/projects/LVSM/ . |
|
2024-10-24T00:00:00 | 2410.13816 | Steering Your Generalists: Improving Robotic Foundation Models via Value Guidance | [
"Mitsuhiko Nakamoto",
"Oier Mees",
"Aviral Kumar",
"Sergey Levine"
] | Large, general-purpose robotic policies trained on diverse demonstration datasets have been shown to be remarkably effective both for controlling a variety of robots in a range of different scenes, and for acquiring broad repertoires of manipulation skills. However, the data that such policies are trained on is generally of mixed quality -- not only are human-collected demonstrations unlikely to perform the task perfectly, but the larger the dataset is, the harder it is to curate only the highest quality examples. It also remains unclear how optimal data from one embodiment is for training on another embodiment. In this paper, we present a general and broadly applicable approach that enhances the performance of such generalist robot policies at deployment time by re-ranking their actions according to a value function learned via offline RL. This approach, which we call Value-Guided Policy Steering (V-GPS), is compatible with a wide range of different generalist policies, without needing to fine-tune or even access the weights of the policy. We show that the same value function can improve the performance of five different state-of-the-art policies with different architectures, even though they were trained on distinct datasets, attaining consistent performance improvement on multiple robotic platforms across a total of 12 tasks. Code and videos can be found at: https://nakamotoo.github.io/V-GPS |
|
2024-10-25T00:00:00 | 2410.18785 | Should We Really Edit Language Models? On the Evaluation of Edited Language Models | [
"Qi Li",
"Xiang Liu",
"Zhenheng Tang",
"Peijie Dong",
"Zeyu Li",
"Xinglin Pan",
"Xiaowen Chu"
] | https://github.com/lqinfdim/EditingEvaluation | Model editing has become an increasingly popular alternative for efficiently updating knowledge within language models. Current methods mainly focus on reliability, generalization, and locality, with many methods excelling across these criteria. Some recent works disclose the pitfalls of these editing methods such as knowledge distortion or conflict. However, the general abilities of post-edited language models remain unexplored. In this paper, we perform a comprehensive evaluation on various editing methods and different language models, and have the following findings. (1) Existing editing methods lead to inevitable performance deterioration on general benchmarks, indicating that existing editing methods maintain the general abilities of the model within only a few dozen edits. When the number of edits is slightly larger, the intrinsic knowledge structure of the model is disrupted or even completely damaged. (2) Instruction-tuned models are more robust to editing, showing less performance drop on general knowledge after editing. (3) Language models at larger scales are more resistant to editing than smaller models. (4) The safety of the edited model is significantly weakened, even for those safety-aligned models. Our findings indicate that current editing methods are only suitable for small-scale knowledge updates within language models, which motivates further research on more practical and reliable editing methods. The details of code and reproduction can be found at https://github.com/lqinfdim/EditingEvaluation. |
2024-10-25T00:00:00 | 2410.18958 | Stable Consistency Tuning: Understanding and Improving Consistency Models | [
"Fu-Yun Wang",
"Zhengyang Geng",
"Hongsheng Li"
] | Diffusion models achieve superior generation quality but suffer from slow generation speed due to the iterative nature of denoising. In contrast, consistency models, a new generative family, achieve competitive performance with significantly faster sampling. These models are trained either through consistency distillation, which leverages pretrained diffusion models, or consistency training/tuning directly from raw data. In this work, we propose a novel framework for understanding consistency models by modeling the denoising process of the diffusion model as a Markov Decision Process (MDP) and framing consistency model training as value estimation through Temporal Difference (TD) learning. More importantly, this framework allows us to analyze the limitations of current consistency training/tuning strategies. Built upon Easy Consistency Tuning (ECT), we propose Stable Consistency Tuning (SCT), which incorporates variance-reduced learning using the score identity. SCT leads to significant performance improvements on benchmarks such as CIFAR-10 and ImageNet-64. On ImageNet-64, SCT achieves 1-step FID 2.42 and 2-step FID 1.55, a new SoTA for consistency models. |
|
2024-10-25T00:00:00 | 2410.18975 | Unbounded: A Generative Infinite Game of Character Life Simulation | [
"Jialu Li",
"Yuanzhen Li",
"Neal Wadhwa",
"Yael Pritch",
"David E. Jacobs",
"Michael Rubinstein",
"Mohit Bansal",
"Nataniel Ruiz"
] | We introduce the concept of a generative infinite game, a video game that transcends the traditional boundaries of finite, hard-coded systems by using generative models. Inspired by James P. Carse's distinction between finite and infinite games, we leverage recent advances in generative AI to create Unbounded: a game of character life simulation that is fully encapsulated in generative models. Specifically, Unbounded draws inspiration from sandbox life simulations and allows you to interact with your autonomous virtual character in a virtual world by feeding, playing with and guiding it - with open-ended mechanics generated by an LLM, some of which can be emergent. In order to develop Unbounded, we propose technical innovations in both the LLM and visual generation domains. Specifically, we present: (1) a specialized, distilled large language model (LLM) that dynamically generates game mechanics, narratives, and character interactions in real-time, and (2) a new dynamic regional image prompt Adapter (IP-Adapter) for vision models that ensures consistent yet flexible visual generation of a character across multiple environments. We evaluate our system through both qualitative and quantitative analysis, showing significant improvements in character life simulation, user instruction following, narrative coherence, and visual consistency for both characters and the environments compared to traditional related approaches. |
|
2024-10-25T00:00:00 | 2410.18505 | CCI3.0-HQ: a large-scale Chinese dataset of high quality designed for pre-training large language models | [
"Liangdong Wang",
"Bo-Wen Zhang",
"Chengwei Wu",
"Hanyu Zhao",
"Xiaofeng Shi",
"Shuhao Gu",
"Jijie Li",
"Quanyue Ma",
"TengFei Pan",
"Guang Liu"
] | We present CCI3.0-HQ (https://huggingface.co/datasets/BAAI/CCI3-HQ), a high-quality 500GB subset of the Chinese Corpora Internet 3.0 (CCI3.0)(https://huggingface.co/datasets/BAAI/CCI3-Data), developed using a novel two-stage hybrid filtering pipeline that significantly enhances data quality. To evaluate its effectiveness, we trained a 0.5B parameter model from scratch on 100B tokens across various datasets, achieving superior performance on 10 benchmarks in a zero-shot setting compared to CCI3.0, SkyPile, and WanjuanV1. The high-quality filtering process effectively distills the capabilities of the Qwen2-72B-instruct model into a compact 0.5B model, attaining optimal F1 scores for Chinese web data classification. We believe this open-access dataset will facilitate broader access to high-quality language models. |
|
2024-10-25T00:00:00 | 2410.17243 | Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss | [
"Zesen Cheng",
"Hang Zhang",
"Kehan Li",
"Sicong Leng",
"Zhiqiang Hu",
"Fei Wu",
"Deli Zhao",
"Xin Li",
"Lidong Bing"
] | Contrastive loss is a powerful approach for representation learning, where larger batch sizes enhance performance by providing more negative samples to better distinguish between similar and dissimilar data. However, scaling batch sizes is constrained by the quadratic growth in GPU memory consumption, primarily due to the full instantiation of the similarity matrix. To address this, we propose a tile-based computation strategy that partitions the contrastive loss calculation into arbitrarily small blocks, avoiding full materialization of the similarity matrix. Furthermore, we introduce a multi-level tiling strategy to leverage the hierarchical structure of distributed systems, employing ring-based communication at the GPU level to optimize synchronization and fused kernels at the CUDA core level to reduce I/O overhead. Experimental results show that the proposed method scales batch sizes to unprecedented levels. For instance, it enables contrastive training of a CLIP-ViT-L/14 model with a batch size of 4M or 12M using 8 or 32 A800 80GB GPUs without sacrificing any accuracy. Compared to SOTA memory-efficient solutions, it achieves a two-order-of-magnitude reduction in memory while maintaining comparable speed. The code will be made publicly available. |
|
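A small numpy sketch of the tiling idea above: compute a one-directional InfoNCE loss block by block with a streaming log-sum-exp, so the full B×B similarity matrix is never materialized. Block size, temperature, and the single-process setting are assumptions; the method described in the abstract additionally distributes tiles across GPUs with ring-based communication and fused kernels.

```python
import numpy as np

def tiled_infonce_loss(q, k, tau=0.07, block=256):
    """One-directional InfoNCE (q -> k) without building the full B x B matrix.

    q, k: L2-normalized arrays of shape (B, D); row i of k is the positive
    for row i of q.  Block size and temperature are illustrative choices.
    """
    B = q.shape[0]
    run_max = np.full(B, -np.inf)      # running row-wise max of logits
    run_sum = np.zeros(B)              # running sum of exp(logit - run_max)
    pos = np.sum(q * k, axis=1) / tau  # positive logits (the diagonal)

    for start in range(0, B, block):
        logits = q @ k[start:start + block].T / tau   # only a (B, block) tile
        tile_max = logits.max(axis=1)
        new_max = np.maximum(run_max, tile_max)
        run_sum = (run_sum * np.exp(run_max - new_max)
                   + np.exp(logits - new_max[:, None]).sum(axis=1))
        run_max = new_max

    logsumexp = run_max + np.log(run_sum)
    return float(np.mean(logsumexp - pos))  # -log softmax of the positive

# Sanity check against the dense computation on a small batch.
rng = np.random.default_rng(0)
q = rng.normal(size=(512, 32)); q /= np.linalg.norm(q, axis=1, keepdims=True)
k = rng.normal(size=(512, 32)); k /= np.linalg.norm(k, axis=1, keepdims=True)
dense = (q @ k.T) / 0.07
dense_loss = np.mean(np.log(np.exp(dense).sum(axis=1)) - np.diag(dense))
print(np.isclose(tiled_infonce_loss(q, k, block=100), dense_loss))  # True
```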
2024-10-25T00:00:00 | 2410.18978 | Framer: Interactive Frame Interpolation | [
"Wen Wang",
"Qiuyu Wang",
"Kecheng Zheng",
"Hao Ouyang",
"Zhekai Chen",
"Biao Gong",
"Hao Chen",
"Yujun Shen",
"Chunhua Shen"
] | We propose Framer for interactive frame interpolation, which targets producing smoothly transitioning frames between two images as per user creativity. Concretely, besides taking the start and end frames as inputs, our approach supports customizing the transition process by tailoring the trajectory of some selected keypoints. Such a design enjoys two clear benefits. First, incorporating human interaction mitigates the issue arising from numerous possibilities of transforming one image to another, and in turn enables finer control of local motions. Second, as the most basic form of interaction, keypoints help establish the correspondence across frames, enhancing the model to handle challenging cases (e.g., objects on the start and end frames are of different shapes and styles). It is noteworthy that our system also offers an "autopilot" mode, where we introduce a module to estimate the keypoints and refine the trajectory automatically, to simplify the usage in practice. Extensive experimental results demonstrate the appealing performance of Framer on various applications, such as image morphing, time-lapse video generation, cartoon interpolation, etc. The code, the model, and the interface will be released to facilitate further research. |
|
2024-10-25T00:00:00 | 2410.18745 | Why Does the Effective Context Length of LLMs Fall Short? | [
"Chenxin An",
"Jun Zhang",
"Ming Zhong",
"Lei Li",
"Shansan Gong",
"Yao Luo",
"Jingjing Xu",
"Lingpeng Kong"
] | Advancements in distributed training and efficient attention mechanisms have significantly expanded the context window sizes of large language models (LLMs). However, recent work reveals that the effective context lengths of open-source LLMs often fall short, typically not exceeding half of their training lengths. In this work, we attribute this limitation to the left-skewed frequency distribution of relative positions formed in LLMs' pretraining and post-training stages, which impedes their ability to effectively gather distant information. To address this challenge, we introduce ShifTed Rotary position embeddING (STRING). STRING shifts well-trained positions to overwrite the original ineffective positions during inference, enhancing performance within their existing training lengths. Experimental results show that without additional training, STRING dramatically improves the performance of the latest large-scale models, such as Llama3.1 70B and Qwen2 72B, by over 10 points on popular long-context benchmarks RULER and InfiniteBench, establishing new state-of-the-art results for open-source LLMs. Compared to commercial models, Llama 3.1 70B with STRING even achieves better performance than GPT-4-128K and clearly surpasses Claude 2 and Kimi-chat. |
|
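A worked illustration of the left-skewed relative-position distribution the abstract attributes the problem to: within a causal training window of length L, a relative distance d occurs L − d times, so the largest distances are seen almost never during training. The numbers below are a toy calculation; STRING's actual position-shifting procedure is not reproduced here.

```python
import numpy as np

# Frequency of each relative position d inside a causal window of length L:
# a (query, key) pair at distance d exists for L - d choices of the query,
# so large distances are drastically under-trained (the "left-skewed" shape).
L = 8192
d = np.arange(L)
freq = L - d

print(freq[1], freq[L // 2], freq[L - 1])   # 8191, 4096, 1
print(freq[: L // 2].sum() / freq.sum())    # ~0.75: most pairs are "near" pairs
```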
2024-10-25T00:00:00 | 2410.18451 | Skywork-Reward: Bag of Tricks for Reward Modeling in LLMs | [
"Chris Yuhao Liu",
"Liang Zeng",
"Jiacai Liu",
"Rui Yan",
"Jujie He",
"Chaojie Wang",
"Shuicheng Yan",
"Yang Liu",
"Yahui Zhou"
] | In this report, we introduce a collection of methods to enhance reward modeling for LLMs, focusing specifically on data-centric techniques. We propose effective data selection and filtering strategies for curating high-quality open-source preference datasets, culminating in the Skywork-Reward data collection, which contains only 80K preference pairs -- significantly smaller than existing datasets. Using this curated dataset, we developed the Skywork-Reward model series -- Skywork-Reward-Gemma-27B and Skywork-Reward-Llama-3.1-8B -- with the former currently holding the top position on the RewardBench leaderboard. Notably, our techniques and datasets have directly enhanced the performance of many top-ranked models on RewardBench, highlighting the practical impact of our contributions in real-world preference learning applications. |
|
2024-10-25T00:00:00 | 2410.18693 | Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch | [
"Yuyang Ding",
"Xinyu Shi",
"Xiaobo Liang",
"Juntao Li",
"Qiaoming Zhu",
"Min Zhang"
] | The availability of high-quality data is one of the most important factors in improving the reasoning capability of LLMs. Existing works have demonstrated the effectiveness of creating more instruction data from seed questions or knowledge bases. Recent research indicates that continually scaling up data synthesis from strong models (e.g., GPT-4) can further elicit reasoning performance. Though promising, the open-source community still lacks high-quality data at scale and scalable data synthesis methods with affordable costs. To address this, we introduce ScaleQuest, a scalable and novel data synthesis method that utilizes "small-size" (e.g., 7B) open-source models to generate questions from scratch without the need for seed data with complex augmentation constraints. With the efficient ScaleQuest, we automatically constructed a mathematical reasoning dataset consisting of 1 million problem-solution pairs, which are more effective than existing open-source datasets. It can universally increase the performance of mainstream open-source models (i.e., Mistral, Llama3, DeepSeekMath, and Qwen2-Math), achieving 29.2% to 46.4% gains on MATH. Notably, simply fine-tuning the Qwen2-Math-7B-Base model with our dataset can even surpass Qwen2-Math-7B-Instruct, a strong and well-aligned model on closed-source data, and proprietary models such as GPT-4-Turbo and Claude-3.5 Sonnet. |
|
2024-10-25T00:00:00 | 2410.18533 | LOGO -- Long cOntext aliGnment via efficient preference Optimization | [
"Zecheng Tang",
"Zechen Sun",
"Juntao Li",
"Qiaoming Zhu",
"Min Zhang"
] | Long-context models (LCMs) have shown great potential in processing long input sequences (even more than 100M tokens) conveniently and effectively. With significant progress, recent research has pointed out that LCMs can accurately locate token-level salient information within the context. Yet, the generation performance of these LCMs is far from satisfactory and might result in misaligned responses, such as hallucinations. To enhance the generation capability of LCMs, existing works have investigated the effects of data size and quality for both pre-training and instruction tuning. Though achieving meaningful improvement, previous methods fall short in either effectiveness or efficiency. In this paper, we introduce LOGO (Long cOntext aliGnment via efficient preference Optimization), a training strategy that first introduces preference optimization for long-context alignment. To overcome the GPU memory-bound issue caused by the long sequence, LOGO employs a reference-free preference optimization strategy and adopts a position synthesis method to construct the training data. By training with only 0.3B data on a single 8×A800 GPU machine for 16 hours, LOGO allows the Llama-3-8B-Instruct-80K model to achieve comparable performance with GPT-4 in real-world long-context tasks while preserving the model's original capabilities on other tasks, e.g., language modeling and MMLU. Moreover, LOGO can extend the model's context window size while enhancing its generation performance. |
|
2024-10-25T00:00:00 | 2410.18775 | Robust Watermarking Using Generative Priors Against Image Editing: From Benchmarking to Advances | [
"Shilin Lu",
"Zihan Zhou",
"Jiayou Lu",
"Yuanzhi Zhu",
"Adams Wai-Kin Kong"
] | https://github.com/Shilin-LU/VINE | Current image watermarking methods are vulnerable to advanced image editing techniques enabled by large-scale text-to-image models. These models can distort embedded watermarks during editing, posing significant challenges to copyright protection. In this work, we introduce W-Bench, the first comprehensive benchmark designed to evaluate the robustness of watermarking methods against a wide range of image editing techniques, including image regeneration, global editing, local editing, and image-to-video generation. Through extensive evaluations of eleven representative watermarking methods against prevalent editing techniques, we demonstrate that most methods fail to detect watermarks after such edits. To address this limitation, we propose VINE, a watermarking method that significantly enhances robustness against various image editing techniques while maintaining high image quality. Our approach involves two key innovations: (1) we analyze the frequency characteristics of image editing and identify that blurring distortions exhibit similar frequency properties, which allows us to use them as surrogate attacks during training to bolster watermark robustness; (2) we leverage a large-scale pretrained diffusion model SDXL-Turbo, adapting it for the watermarking task to achieve more imperceptible and robust watermark embedding. Experimental results show that our method achieves outstanding watermarking performance under various image editing techniques, outperforming existing methods in both image quality and robustness. Code is available at https://github.com/Shilin-LU/VINE. |
2024-10-25T00:00:00 | 2410.18234 | Multi-Draft Speculative Sampling: Canonical Architectures and Theoretical Limits | [
"Ashish Khisti",
"M. Reza Ebrahimi",
"Hassan Dbouk",
"Arash Behboodi",
"Roland Memisevic",
"Christos Louizos"
] | We consider multi-draft speculative sampling, where the proposal sequences are sampled independently from different draft models. At each step, a token-level draft selection scheme takes a list of valid tokens as input and produces an output token whose distribution matches that of the target model. Previous works have demonstrated that the optimal scheme (which maximizes the probability of accepting one of the input tokens) can be cast as a solution to a linear program. In this work we show that the optimal scheme can be decomposed into a two-step solution: in the first step an importance sampling (IS) type scheme is used to select one intermediate token; in the second step (single-draft) speculative sampling is applied to generate the output token. For the case of two identical draft models we further 1) establish a necessary and sufficient condition on the distributions of the target and draft models for the acceptance probability to equal one and 2) provide an explicit expression for the optimal acceptance probability. Our theoretical analysis also motivates a new class of token-level selection schemes based on weighted importance sampling. Our experimental results demonstrate consistent improvements in the achievable block efficiency and token rates over baseline schemes in a number of scenarios. |
|
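For the second step of the decomposition described above, here is a minimal sketch of standard (single-draft) speculative sampling at one position: accept the draft token with probability min(1, p/q), otherwise resample from the normalized residual max(p − q, 0). The multi-draft selection step and the weighted importance-sampling scheme from the paper are not reproduced; the vocabulary size and distributions are toy assumptions.

```python
import numpy as np

def single_draft_speculative_step(p, q, rng):
    """One token of single-draft speculative sampling.

    p: target-model distribution over the vocabulary (sums to 1).
    q: draft-model distribution over the vocabulary (sums to 1).
    Returns a token whose marginal distribution is exactly p.
    """
    x = rng.choice(len(q), p=q)                 # draft proposes a token
    if rng.random() < min(1.0, p[x] / q[x]):    # accept with prob min(1, p/q)
        return x
    residual = np.maximum(p - q, 0.0)           # otherwise resample from the
    residual /= residual.sum()                  # normalized residual
    return rng.choice(len(p), p=residual)

# Empirical check that the output distribution matches the target p.
rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.5, 0.3])
samples = [single_draft_speculative_step(p, q, rng) for _ in range(20000)]
print(np.bincount(samples, minlength=3) / len(samples))  # ~ [0.5, 0.3, 0.2]
```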
2024-10-25T00:00:00 | 2410.16251 | Can Knowledge Editing Really Correct Hallucinations? | [
"Baixiang Huang",
"Canyu Chen",
"Xiongxiao Xu",
"Ali Payani",
"Kai Shu"
] | Large Language Models (LLMs) suffer from hallucinations, referring to the non-factual information in generated content, despite their superior capacities across tasks. Meanwhile, knowledge editing has been developed as a new popular paradigm to correct the erroneous factual knowledge encoded in LLMs with the advantage of avoiding retraining from scratch. However, one common issue of existing evaluation datasets for knowledge editing is that they do not ensure LLMs actually generate hallucinated answers to the evaluation questions before editing. When LLMs are evaluated on such datasets after being edited by different techniques, it is hard to directly adopt the performance to assess the effectiveness of different knowledge editing methods in correcting hallucinations. Thus, the fundamental question remains insufficiently validated: Can knowledge editing really correct hallucinations in LLMs? We propose HalluEditBench to holistically benchmark knowledge editing methods in correcting real-world hallucinations. First, we rigorously construct a massive hallucination dataset with 9 domains, 26 topics and more than 6,000 hallucinations. Then, we assess the performance of knowledge editing methods in a holistic way on five dimensions including Efficacy, Generalization, Portability, Locality, and Robustness. Through HalluEditBench, we have provided new insights into the potentials and limitations of different knowledge editing methods in correcting hallucinations, which could inspire future improvements and facilitate the progress in the field of knowledge editing. |
|
2024-10-25T00:00:00 | 2410.18798 | Distill Visual Chart Reasoning Ability from LLMs to MLLMs | [
"Wei He",
"Zhiheng Xi",
"Wanxu Zhao",
"Xiaoran Fan",
"Yiwen Ding",
"Zifei Shan",
"Tao Gui",
"Qi Zhang",
"Xuanjing Huang"
] | https://github.com/hewei2001/ReachQA | Solving complex chart Q&A tasks requires advanced visual reasoning abilities in multimodal large language models (MLLMs). Recent studies highlight that these abilities consist of two main parts: recognizing key information from visual inputs and conducting reasoning over it. Thus, a promising approach to enhance MLLMs is to construct relevant training data focusing on the two aspects. However, collecting and annotating complex charts and questions is costly and time-consuming, and ensuring the quality of annotated answers remains a challenge. In this paper, we propose Code-as-Intermediary Translation (CIT), a cost-effective, efficient and easily scalable data synthesis method for distilling visual reasoning abilities from LLMs to MLLMs. The code serves as an intermediary that translates visual chart representations into textual representations, enabling LLMs to understand cross-modal information. Specifically, we employ text-based synthesizing techniques to construct chart-plotting code and produce ReachQA, a dataset containing 3k reasoning-intensive charts and 20k Q&A pairs to enhance both recognition and reasoning abilities. Experiments show that when fine-tuned with our data, models not only perform well on chart-related benchmarks, but also demonstrate improved multimodal reasoning abilities on general mathematical benchmarks like MathVista. The code and dataset are publicly available at https://github.com/hewei2001/ReachQA. |
2024-10-25T00:00:00 | 2410.18977 | MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms | [
"Ling-Hao Chen",
"Wenxun Dai",
"Xuan Ju",
"Shunlin Lu",
"Lei Zhang"
] | This research delves into the problem of interactive editing of human motion generation. Previous motion diffusion models lack explicit modeling of the word-level text-motion correspondence and good explainability, hence restricting their fine-grained editing ability. To address this issue, we propose an attention-based motion diffusion model, namely MotionCLR, with CLeaR modeling of attention mechanisms. Technically, MotionCLR models the in-modality and cross-modality interactions with self-attention and cross-attention, respectively. More specifically, the self-attention mechanism aims to measure the sequential similarity between frames and impacts the order of motion features. By contrast, the cross-attention mechanism works to find the fine-grained word-sequence correspondence and activate the corresponding timesteps in the motion sequence. Based on these key properties, we develop a versatile set of simple yet effective motion editing methods via manipulating attention maps, such as motion (de-)emphasizing, in-place motion replacement, and example-based motion generation, etc. For further verification of the explainability of the attention mechanism, we additionally explore the potential of action-counting and grounded motion generation ability via attention maps. Our experimental results show that our method enjoys good generation and editing ability with good explainability. |
|
2024-10-25T00:00:00 | 2410.18362 | WAFFLE: Multi-Modal Model for Automated Front-End Development | [
"Shanchao Liang",
"Nan Jiang",
"Shangshu Qian",
"Lin Tan"
] | Web development involves turning UI designs into functional webpages, which can be difficult for both beginners and experienced developers due to the complexity of HTML's hierarchical structures and styles. While Large Language Models (LLMs) have shown promise in generating source code, two major challenges persist in UI-to-HTML code generation: (1) effectively representing HTML's hierarchical structure for LLMs, and (2) bridging the gap between the visual nature of UI designs and the text-based format of HTML code. To tackle these challenges, we introduce Waffle, a new fine-tuning strategy that uses a structure-aware attention mechanism to improve LLMs' understanding of HTML's structure and a contrastive fine-tuning approach to align LLMs' understanding of UI images and HTML code. Models fine-tuned with Waffle show up to 9.00 pp (percentage point) higher HTML match, 0.0982 higher CW-SSIM, 32.99 higher CLIP, and 27.12 pp higher LLEM on our new benchmark WebSight-Test and an existing benchmark Design2Code, outperforming current fine-tuning methods. |
|
2024-10-25T00:00:00 | 2410.15580 | Language Models are Symbolic Learners in Arithmetic | [
"Chunyuan Deng",
"Zhiqi Li",
"Roy Xie",
"Ruidi Chang",
"Hanjie Chen"
] | Large Language Models (LLMs) are thought to struggle with arithmetic learning due to the inherent differences between language modeling and numerical computation, but concrete evidence has been lacking. This work responds to this claim through a two-sided experiment. We first investigate whether LLMs leverage partial products during arithmetic learning. We find that although LLMs can identify some partial products after learning, they fail to leverage them for arithmetic tasks. We then explore how LLMs approach arithmetic symbolically by breaking tasks into subgroups, hypothesizing that difficulties arise from subgroup complexity and selection. Our results show that when subgroup complexity is fixed, LLMs treat a collection of different arithmetic operations similarly. By analyzing position-level accuracy across different training sizes, we further observe that it follows a U-shaped pattern: LLMs quickly learn the easiest patterns at the first and last positions, while progressively learning the more difficult patterns in the middle positions. This suggests that LLMs select subgroups following an easy-to-hard paradigm during learning. Our work confirms that LLMs are pure symbolic learners in arithmetic tasks and underscores the importance of understanding them deeply through subgroup-level quantification. |
|
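A small sketch of the position-level accuracy bookkeeping mentioned above: given predicted and gold answer strings of equal length (e.g., multi-digit results), compute accuracy separately at each digit position to expose a U-shaped pattern. The toy data is synthetic and only illustrates the computation, not the paper's results.

```python
import numpy as np

def position_level_accuracy(predictions, references):
    """Per-position accuracy for equal-length digit strings.

    predictions, references: lists of strings of the same length N (e.g. the
    N-digit result of a multiplication).  Returns an array of length N with
    the accuracy at each digit position.
    """
    pred = np.array([list(p) for p in predictions])
    ref = np.array([list(r) for r in references])
    return (pred == ref).mean(axis=0)

# Synthetic example: outer digits correct more often than middle ones gives
# the kind of U-shaped curve described in the abstract.
preds = ["12345", "19345", "12845", "12395"]
golds = ["12345", "12345", "12345", "12345"]
print(position_level_accuracy(preds, golds))  # -> [1. 0.75 0.75 0.75 1.]
```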
2024-10-25T00:00:00 | 2410.17779 | ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning | [
"Zhiwei Hao",
"Jianyuan Guo",
"Li Shen",
"Yong Luo",
"Han Hu",
"Yonggang Wen"
] | https://github.com/Hao840/ADEM-VL | Recent advancements in multimodal fusion have witnessed the remarkable success of vision-language (VL) models, which excel in various multimodal applications such as image captioning and visual question answering. However, building VL models requires substantial hardware resources, where efficiency is restricted by two key factors: the extended input sequence of the language model with vision features demands more computational operations, and a large number of additional learnable parameters increase memory complexity. These challenges significantly restrict the broader applicability of such models. To bridge this gap, we propose ADEM-VL, an efficient vision-language method that tunes VL models based on pretrained large language models (LLMs) by adopting a parameter-free cross-attention mechanism for similarity measurements in multimodal fusion. This approach only requires embedding vision features into the language space, significantly reducing the number of trainable parameters and accelerating both training and inference speeds. To enhance representation learning in the fusion module, we introduce an efficient multiscale feature generation scheme that requires only a single forward pass through the vision encoder. Moreover, we propose an adaptive fusion scheme that dynamically discards less relevant visual information for each text token based on its attention score. This ensures that the fusion process prioritizes the most pertinent visual features. With experiments on various tasks including visual question answering, image captioning, and instruction-following, we demonstrate that our framework outperforms existing approaches. Specifically, our method surpasses existing methods by an average accuracy of 0.77% on the ScienceQA dataset, with reduced training and inference latency, demonstrating the superiority of our framework. The code is available at https://github.com/Hao840/ADEM-VL. |
2024-10-25T00:00:00 | 2410.18976 | CAMEL-Bench: A Comprehensive Arabic LMM Benchmark | [
"Sara Ghaboura",
"Ahmed Heakl",
"Omkar Thawakar",
"Ali Alharthi",
"Ines Riahi",
"Abduljalil Saif",
"Jorma Laaksonen",
"Fahad S. Khan",
"Salman Khan",
"Rao M. Anwer"
] | Recent years have witnessed a significant interest in developing large multimodal models (LMMs) capable of performing various visual reasoning and understanding tasks. This has led to the introduction of multiple LMM benchmarks to evaluate LMMs on different tasks. However, most existing LMM evaluation benchmarks are predominantly English-centric. In this work, we develop a comprehensive LMM evaluation benchmark for the Arabic language to represent a large population of over 400 million speakers. The proposed benchmark, named CAMEL-Bench, comprises eight diverse domains and 38 sub-domains, including multi-image understanding, complex visual perception, handwritten document understanding, video understanding, medical imaging, plant diseases, and remote sensing-based land use understanding, to evaluate broad scenario generalizability. Our CAMEL-Bench comprises around 29,036 questions that are filtered from a larger pool of samples, where the quality is manually verified by native speakers to ensure reliable model assessment. We conduct evaluations of both closed-source LMMs, including the GPT-4 series, and open-source LMMs. Our analysis reveals the need for substantial improvement, especially among the best open-source models, with even the closed-source GPT-4o achieving an overall score of 62%. Our benchmark and evaluation scripts are open-sourced. |
|
2024-10-25T00:00:00 | 2410.17897 | Value Residual Learning For Alleviating Attention Concentration In Transformers | [
"Zhanchao Zhou",
"Tianyi Wu",
"Zhiyun Jiang",
"Zhenzhong Lan"
] | Transformers can capture long-range dependencies using self-attention, allowing tokens to attend to all others directly. However, stacking multiple attention layers leads to attention concentration. One natural way to address this issue is to use cross-layer attention, allowing information from earlier layers to be directly accessible to later layers. However, this approach is computationally expensive. To address this problem, we propose the Transformer with residual value (ResFormer), which approximates cross-layer attention through adding a residual connection from the values of the first layer to all subsequent layers. Based on this method, one variant is the Transformer with single layer value (SVFormer), where all layers share the same value embedding from the first layer, reducing the KV cache by nearly 50%. Comprehensive empirical evidence demonstrates that ResFormer mitigates the attention concentration problem in deeper layers and enhances representation across most layers, outperforming the vanilla Transformer, DenseFormer, and NeuTRENO in training error as well as downstream tasks. SVFormer trains significantly faster than the vanilla Transformer and performs better than other methods like GQA and CLA, with performance influenced by sequence length and cumulative learning rate. |
|
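A minimal single-head sketch of the value-residual idea described above: every layer computes its own queries and keys but mixes its value vectors with the values from the first layer. The 0.5/0.5 mixing, the single head, and the absence of output projections and MLPs are simplifying assumptions, not the exact ResFormer parameterization.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def value_residual_attention(x, Wq, Wk, Wv, v_first=None, mix=0.5):
    """Single-head causal self-attention with a residual to first-layer values.

    x: (T, D) token features; Wq, Wk, Wv: (D, D) projection matrices.
    v_first: values computed at the first layer, or None if this *is* layer 1.
    Returns (output, values) so later layers can reuse the first layer's values.
    The mixing coefficient is an illustrative assumption.
    """
    T = x.shape[0]
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    if v_first is not None:                    # value residual to layer 1
        v = mix * v + (1.0 - mix) * v_first
    scores = q @ k.T / np.sqrt(x.shape[1])
    scores = np.where(np.tril(np.ones((T, T), dtype=bool)), scores, -np.inf)
    out = softmax(scores, axis=-1) @ v
    return out, v

# Toy usage: layer 1 produces v1, which every later layer then mixes in.
rng = np.random.default_rng(0)
T, D = 4, 8
x = rng.normal(size=(T, D))
Wq, Wk, Wv = (rng.normal(size=(D, D)) * 0.1 for _ in range(3))
h1, v1 = value_residual_attention(x, Wq, Wk, Wv, v_first=None)
h2, _ = value_residual_attention(h1, Wq, Wk, Wv, v_first=v1)
print(h2.shape)  # (4, 8)
```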
2024-10-25T00:00:00 | 2410.18572 | Taipan: Efficient and Expressive State Space Language Models with Selective Attention | [
"Chien Van Nguyen",
"Huy Huu Nguyen",
"Thang M. Pham",
"Ruiyi Zhang",
"Hanieh Deilamsalehy",
"Puneet Mathur",
"Ryan A. Rossi",
"Trung Bui",
"Viet Dac Lai",
"Franck Dernoncourt",
"Thien Huu Nguyen"
] | Efficient long-context language modeling remains a significant challenge in Natural Language Processing (NLP). While Transformers dominate language tasks, they struggle with long sequences due to quadratic computational complexity in training and linearly scaling memory costs during inference. Recent State Space Models (SSMs) such as Mamba offer alternatives with constant memory usage, but they underperform in tasks requiring extensive in-context retrieval. We introduce Taipan, a novel hybrid architecture that combines Mamba-2 with Selective Attention Layers (SALs). These SALs identify tokens requiring long-range interactions, remove less important features, and then augment their representations using the attention module. This approach balances Mamba's efficiency with Transformer-like performance in memory-intensive tasks. By constraining the attention budget, Taipan extends accurate predictions to context lengths of up to 1 million tokens while preserving computational efficiency. Our experiments demonstrate Taipan's superior performance across various scales and tasks, offering a promising solution for efficient long-context language modeling. |
|
2024-10-25T00:00:00 | 2410.18647 | Data Scaling Laws in Imitation Learning for Robotic Manipulation | [
"Fanqi Lin",
"Yingdong Hu",
"Pingyue Sheng",
"Chuan Wen",
"Jiacheng You",
"Yang Gao"
] | Data scaling has revolutionized fields like natural language processing and computer vision, providing models with remarkable generalization capabilities. In this paper, we investigate whether similar data scaling laws exist in robotics, particularly in robotic manipulation, and whether appropriate data scaling can yield single-task robot policies that can be deployed zero-shot for any object within the same category in any environment. To this end, we conduct a comprehensive empirical study on data scaling in imitation learning. By collecting data across numerous environments and objects, we study how a policy's generalization performance changes with the number of training environments, objects, and demonstrations. Throughout our research, we collect over 40,000 demonstrations and execute more than 15,000 real-world robot rollouts under a rigorous evaluation protocol. Our findings reveal several intriguing results: the generalization performance of the policy follows a roughly power-law relationship with the number of environments and objects. The diversity of environments and objects is far more important than the absolute number of demonstrations; once the number of demonstrations per environment or object reaches a certain threshold, additional demonstrations have minimal effect. Based on these insights, we propose an efficient data collection strategy. With four data collectors working for one afternoon, we collect sufficient data to enable the policies for two tasks to achieve approximately 90% success rates in novel environments with unseen objects. |
|
2024-10-25T00:00:00 | 2410.18538 | SMITE: Segment Me In TimE | [
"Amirhossein Alimohammadi",
"Sauradip Nag",
"Saeid Asgari Taghanaki",
"Andrea Tagliasacchi",
"Ghassan Hamarneh",
"Ali Mahdavi Amiri"
] | Segmenting an object in a video presents significant challenges. Each pixel must be accurately labelled, and these labels must remain consistent across frames. The difficulty increases when the segmentation has arbitrary granularity, meaning the number of segments can vary arbitrarily, and masks are defined based on only one or a few sample images. In this paper, we address this issue by employing a pre-trained text-to-image diffusion model supplemented with an additional tracking mechanism. We demonstrate that our approach can effectively manage various segmentation scenarios and outperforms state-of-the-art alternatives. |
|
2024-10-25T00:00:00 | 2410.15999 | Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering | [
"Yu Zhao",
"Alessio Devoto",
"Giwon Hong",
"Xiaotang Du",
"Aryo Pradipta Gema",
"Hongru Wang",
"Kam-Fai Wong",
"Pasquale Minervini"
] | Large language models (LLMs) can store a significant amount of factual knowledge in their parameters. However, their parametric knowledge may conflict with the information provided in the context -- this phenomenon, known as context-memory knowledge conflicts, can lead to undesirable model behaviour, such as reliance on outdated or incorrect information. Analysing the internal activations of LLMs, we find that they can internally register the signals of knowledge conflict at mid-layers. Such signals allow us to detect whether a knowledge conflict occurs and use inference-time intervention strategies to resolve it. In this work, we propose SpARE, a training-free representation engineering method that uses pre-trained sparse auto-encoders (SAEs) to control the knowledge selection behaviour of LLMs. SpARE identifies the functional features that control the knowledge selection behaviours and applies them to edit the internal activations of LLMs at inference time. Our experimental results show that SpARE can effectively control the usage of either knowledge source to resolve knowledge conflict in open-domain question-answering tasks, surpassing existing representation engineering methods (+10%) as well as contrastive decoding methods (+15%). |
|
2024-10-25T00:00:00 | 2410.18860 | DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations | [
"Aryo Pradipta Gema",
"Chen Jin",
"Ahmed Abdulaal",
"Tom Diethe",
"Philip Teare",
"Beatrice Alex",
"Pasquale Minervini",
"Amrutha Saseendran"
] | Large Language Models (LLMs) often hallucinate, producing unfaithful or factually incorrect outputs by misrepresenting the provided context or incorrectly recalling internal knowledge. Recent studies have identified specific attention heads within the Transformer architecture, known as retrieval heads, responsible for extracting relevant contextual information. We hypothesise that masking these retrieval heads can induce hallucinations and that contrasting the outputs of the base LLM and the masked LLM can reduce hallucinations. To this end, we propose Decoding by Contrasting Retrieval Heads (DeCoRe), a novel training-free decoding strategy that amplifies information found in the context and model parameters. DeCoRe mitigates potentially hallucinated responses by dynamically contrasting the outputs of the base LLM and the masked LLM, using conditional entropy as a guide. Our extensive experiments confirm that DeCoRe significantly improves performance on tasks requiring high contextual faithfulness, such as summarisation (XSum by 18.6%), instruction following (MemoTrap by 10.9%), and open-book question answering (NQ-Open by 2.4% and NQ-Swap by 5.5%). |
|
2024-10-25T00:00:00 | 2410.18441 | The Nature of Mathematical Modeling and Probabilistic Optimization Engineering in Generative AI | [
"Fulu Li"
] | In this paper, we give an in-depth analysis of the mathematical problem formulations and the probabilistic optimization explorations for some of the key components of the Transformer model [33] in the field of generative AI. We explore and discuss some potential further enhancements to current state-of-the-art methods for some key underlying technologies of generative AI models from an algorithmic and probabilistic optimization perspective. In particular, we present an optimal solution for sub-word encoding (SWE) based on similar initial settings as those of the byte-pair encoding (BPE) algorithm in [9], with similar objectives as those of the WordPiece approach in [28, 31], to maximize the likelihood of the training data. We also present a cross-entropy optimization method to optimize hyperparameters for the word2vec model [17]. In addition, we propose a factored combination of rotary positional encoding (RoPE) [32] and attention with linear biases (ALiBi) [23] with a harmonic series. We also present a probabilistic FlashAttention [6, 7] (PrFlashAttention) method with a probability distribution over block distances in the matrix to decide which block is likely to participate in a given round of attention computation, while maintaining the lower triangle shape of the tensor for autoregressive language models by re-shaping the tensors. Finally, we present staircase adaptive quantization (SAQ) of the key-value (KV) cache for multi-query attention (MQA) based on the framework presented in [16] to achieve gradual quantization degradation while maintaining reasonable model quality and cost savings. |
|
2024-10-25T00:00:00 | 2410.18252 | Asynchronous RLHF: Faster and More Efficient Off-Policy RL for Language Models | [
"Michael Noukhovitch",
"Shengyi Huang",
"Sophie Xhonneux",
"Arian Hosseini",
"Rishabh Agarwal",
"Aaron Courville"
] | The dominant paradigm for RLHF is online and on-policy RL: synchronously generating from the large language model (LLM) policy, labelling with a reward model, and learning using feedback on the LLM's own outputs. While performant, this paradigm is computationally inefficient. Inspired by classical deep RL literature, we propose separating generation and learning in RLHF. This enables asynchronous generation of new samples while simultaneously training on old samples, leading to faster training and more compute-optimal scaling. However, asynchronous training relies on an underexplored regime, online but off-policy RLHF: learning on samples from previous iterations of our model. To understand the challenges in this regime, we investigate a fundamental question: how much off-policyness can we tolerate for asynchronous training to speed up learning but maintain performance? Among several RLHF algorithms we tested, we find that online DPO is most robust to off-policy data, and robustness increases with the scale of the policy model. We study further compute optimizations for asynchronous RLHF but find that they come at a performance cost, giving rise to a trade-off. Finally, we verify the scalability of asynchronous RLHF by training LLaMA 3.1 8B on an instruction-following task 40% faster than a synchronous run while matching final performance. |
|
2024-10-25T00:00:00 | 2410.16429 | Pantograph: A Machine-to-Machine Interaction Interface for Advanced Theorem Proving, High Level Reasoning, and Data Extraction in Lean 4 | [
"Leni Aniva",
"Chuyue Sun",
"Brando Miranda",
"Clark Barrett",
"Sanmi Koyejo"
] | Machine-assisted theorem proving refers to the process of conducting structured reasoning to automatically generate proofs for mathematical theorems. Recently, there has been a surge of interest in using machine learning models in conjunction with proof assistants to perform this task. In this paper, we introduce Pantograph, a tool that provides a versatile interface to the Lean 4 proof assistant and enables efficient proof search via powerful search algorithms such as Monte Carlo Tree Search. In addition, Pantograph enables high-level reasoning by supporting more robust handling of Lean 4's inference steps. We provide an overview of Pantograph's architecture and features. We also report on an illustrative use case: using machine learning models and proof sketches to prove Lean 4 theorems. Pantograph's innovative features pave the way for more advanced machine learning models to perform complex proof searches and high-level reasoning, equipping future researchers to design more versatile and powerful theorem provers. |
|
2024-10-25T00:00:00 | 2410.18194 | ZIP-FIT: Embedding-Free Data Selection via Compression-Based Alignment | [
"Elyas Obbad",
"Iddah Mlauzi",
"Brando Miranda",
"Rylan Schaeffer",
"Kamal Obbad",
"Suhana Bedi",
"Sanmi Koyejo"
] | Data selection is crucial for optimizing language model (LM) performance on specific tasks, yet most existing methods fail to effectively consider the target task distribution. Current approaches either ignore task-specific requirements entirely or rely on approximations that fail to capture the nuanced patterns needed for tasks like Autoformalization or code generation. Methods that do consider the target distribution often rely on simplistic, sometimes noisy, representations, like hashed n-gram features, which can lead to collisions and introduce noise. We introduce ZIP-FIT, a data selection framework that uses gzip compression to directly measure alignment between potential training data and the target task distribution. In extensive evaluations on Autoformalization and Python code generation, ZIP-FIT significantly outperforms leading baselines like DSIR and D4. Models trained on ZIP-FIT-selected data achieve their lowest cross-entropy loss up to 85.1% faster than baselines, demonstrating that better task alignment leads to more efficient learning. In addition, ZIP-FIT performs selection up to 65.8% faster than DSIR and two orders of magnitude faster than D4. Notably, ZIP-FIT shows that smaller, well-aligned datasets often outperform larger but less targeted ones, demonstrating that a small amount of higher quality data is superior to a large amount of lower quality data. Our results imply that task-aware data selection is crucial for efficient domain adaptation, and that compression offers a principled way to measure task alignment. By showing that targeted data selection can dramatically improve task-specific performance, our work provides new insights into the relationship between data quality, task alignment, and model learning efficiency. |
|
2024-10-28T00:00:00 | 2410.19008 | Teach Multimodal LLMs to Comprehend Electrocardiographic Images | [
"Ruoqi Liu",
"Yuelin Bai",
"Xiang Yue",
"Ping Zhang"
] | The electrocardiogram (ECG) is an essential non-invasive diagnostic tool for assessing cardiac conditions. Existing automatic interpretation methods suffer from limited generalizability, focusing on a narrow range of cardiac conditions, and typically depend on raw physiological signals, which may not be readily available in resource-limited settings where only printed or digital ECG images are accessible. Recent advancements in multimodal large language models (MLLMs) present promising opportunities for addressing these challenges. However, the application of MLLMs to ECG image interpretation remains challenging due to the lack of instruction tuning datasets and well-established ECG image benchmarks for quantitative evaluation. To address these challenges, we introduce ECGInstruct, a comprehensive ECG image instruction tuning dataset of over one million samples, covering a wide range of ECG-related tasks from diverse data sources. Using ECGInstruct, we develop PULSE, an MLLM tailored for ECG image comprehension. In addition, we curate ECGBench, a new evaluation benchmark covering four key ECG image interpretation tasks across nine different datasets. Our experiments show that PULSE sets a new state-of-the-art, outperforming general MLLMs with an average accuracy improvement of 15% to 30%. This work highlights the potential of PULSE to enhance ECG interpretation in clinical practice. |
|
2024-10-28T00:00:00 | 2410.19168 | MMAU: A Massive Multi-Task Audio Understanding and Reasoning Benchmark | [
"S Sakshi",
"Utkarsh Tyagi",
"Sonal Kumar",
"Ashish Seth",
"Ramaneswaran Selvakumar",
"Oriol Nieto",
"Ramani Duraiswami",
"Sreyan Ghosh",
"Dinesh Manocha"
] | The ability to comprehend audio--which includes speech, non-speech sounds, and music--is crucial for AI agents to interact effectively with the world. We present MMAU, a novel benchmark designed to evaluate multimodal audio understanding models on tasks requiring expert-level knowledge and complex reasoning. MMAU comprises 10k carefully curated audio clips paired with human-annotated natural language questions and answers spanning speech, environmental sounds, and music. It includes information extraction and reasoning questions, requiring models to demonstrate 27 distinct skills across unique and challenging tasks. Unlike existing benchmarks, MMAU emphasizes advanced perception and reasoning with domain-specific knowledge, challenging models to tackle tasks akin to those faced by experts. We assess 18 open-source and proprietary (Large) Audio-Language Models, demonstrating the significant challenges posed by MMAU. Notably, even the most advanced Gemini Pro v1.5 achieves only 52.97% accuracy, and the state-of-the-art open-source Qwen2-Audio achieves only 52.50%, highlighting considerable room for improvement. We believe MMAU will drive the audio and multimodal research community to develop more advanced audio understanding models capable of solving complex audio tasks. |
|
2024-10-28T00:00:00 | 2410.18076 | Leveraging Skills from Unlabeled Prior Data for Efficient Online Exploration | [
"Max Wilcoxson",
"Qiyang Li",
"Kevin Frans",
"Sergey Levine"
] | https://github.com/rail-berkeley/supe | Unsupervised pretraining has been transformative in many supervised domains. However, applying such ideas to reinforcement learning (RL) presents a unique challenge in that fine-tuning does not involve mimicking task-specific data, but rather exploring and locating the solution through iterative self-improvement. In this work, we study how unlabeled prior trajectory data can be leveraged to learn efficient exploration strategies. While prior data can be used to pretrain a set of low-level skills, or as additional off-policy data for online RL, it has been unclear how to combine these ideas effectively for online exploration. Our method SUPE (Skills from Unlabeled Prior data for Exploration) demonstrates that a careful combination of these ideas compounds their benefits. Our method first extracts low-level skills using a variational autoencoder (VAE), and then pseudo-relabels unlabeled trajectories using an optimistic reward model, transforming prior data into high-level, task-relevant examples. Finally, SUPE uses these transformed examples as additional off-policy data for online RL to learn a high-level policy that composes pretrained low-level skills to explore efficiently. We empirically show that SUPE reliably outperforms prior strategies, successfully solving a suite of long-horizon, sparse-reward tasks. Code: https://github.com/rail-berkeley/supe. |
2024-10-28T00:00:00 | 2410.18558 | Infinity-MM: Scaling Multimodal Performance with Large-Scale and High-Quality Instruction Data | [
"Shuhao Gu",
"Jialing Zhang",
"Siyuan Zhou",
"Kevin Yu",
"Zhaohu Xing",
"Liangdong Wang",
"Zhou Cao",
"Jintao Jia",
"Zhuoyi Zhang",
"Yixuan Wang",
"Zhenchong Hu",
"Bo-Wen Zhang",
"Jijie Li",
"Dong Liang",
"Yingli Zhao",
"Yulong Ao",
"Yaoqi Liu",
"Fangxiang Feng",
"Guang Liu"
] | Vision-Language Models (VLMs) have recently made significant progress, but the limited scale and quality of open-source instruction data hinder their performance compared to closed-source models. In this work, we address this limitation by introducing Infinity-MM, a large-scale multimodal instruction dataset with 40 million samples, enhanced through rigorous quality filtering and deduplication. We also propose a synthetic instruction generation method based on open-source VLMs, using detailed image annotations and diverse question generation. Using this data, we trained a 2-billion-parameter VLM, Aquila-VL-2B, achieving state-of-the-art (SOTA) performance for models of similar scale. This demonstrates that expanding instruction data and generating synthetic data can significantly improve the performance of open-source models. |
|
2024-10-28T00:00:00 | 2410.19355 | FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality | [
"Zhengyao Lv",
"Chenyang Si",
"Junhao Song",
"Zhenyu Yang",
"Yu Qiao",
"Ziwei Liu",
"Kwan-Yee K. Wong"
] | In this paper, we present FasterCache, a novel training-free strategy designed to accelerate the inference of video diffusion models with high-quality generation. By analyzing existing cache-based methods, we observe that directly reusing adjacent-step features degrades video quality due to the loss of subtle variations. We further perform a pioneering investigation of the acceleration potential of classifier-free guidance (CFG) and reveal significant redundancy between conditional and unconditional features within the same timestep. Capitalizing on these observations, we introduce FasterCache to substantially accelerate diffusion-based video generation. Our key contributions include a dynamic feature reuse strategy that preserves both feature distinction and temporal continuity, and CFG-Cache, which optimizes the reuse of conditional and unconditional outputs to further enhance inference speed without compromising video quality. We empirically evaluate FasterCache on recent video diffusion models. Experimental results show that FasterCache can significantly accelerate video generation (e.g., a 1.67× speedup on Vchitect-2.0) while keeping video quality comparable to the baseline, and consistently outperform existing methods in both inference speed and video quality. |
|
2024-10-28T00:00:00 | 2410.17856 | ROCKET-1: Master Open-World Interaction with Visual-Temporal Context Prompting | [
"Shaofei Cai",
"Zihao Wang",
"Kewei Lian",
"Zhancun Mu",
"Xiaojian Ma",
"Anji Liu",
"Yitao Liang"
] | Vision-language models (VLMs) have excelled in multimodal tasks, but adapting them to embodied decision-making in open-world environments presents challenges. A key issue is the difficulty in smoothly connecting individual entities in low-level observations with abstract concepts required for planning. A common approach to address this problem is through the use of hierarchical agents, where VLMs serve as high-level reasoners that break down tasks into executable sub-tasks, typically specified using language and imagined observations. However, language often fails to effectively convey spatial information, while generating future images with sufficient accuracy remains challenging. To address these limitations, we propose visual-temporal context prompting, a novel communication protocol between VLMs and policy models. This protocol leverages object segmentation from both past and present observations to guide policy-environment interactions. Using this approach, we train ROCKET-1, a low-level policy that predicts actions based on concatenated visual observations and segmentation masks, with real-time object tracking provided by SAM-2. Our method unlocks the full potential of VLMs' visual-language reasoning abilities, enabling them to solve complex creative tasks, especially those heavily reliant on spatial understanding. Experiments in Minecraft demonstrate that our approach allows agents to accomplish previously unattainable tasks, highlighting the effectiveness of visual-temporal context prompting in embodied decision-making. Codes and demos will be available on the project page: https://craftjarvis.github.io/ROCKET-1. |
|
2024-10-28T00:00:00 | 2410.19290 | Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning | [
"Yujian Liu",
"Shiyu Chang",
"Tommi Jaakkola",
"Yang Zhang"
] | https://github.com/UCSB-NLP-Chang/Prereq_tune.git | Recent studies have identified one aggravating factor of LLM hallucinations as the knowledge inconsistency between pre-training and fine-tuning, where unfamiliar fine-tuning data mislead the LLM to fabricate plausible but wrong outputs. In this paper, we propose a novel fine-tuning strategy called Prereq-Tune to address this knowledge inconsistency and reduce hallucinations. Fundamentally, Prereq-Tune disentangles the learning of skills and knowledge, so the model learns only the task skills without being impacted by the knowledge inconsistency. To achieve this, Prereq-Tune introduces an additional prerequisite learning stage to learn the necessary knowledge for SFT, allowing subsequent SFT to focus only on task skills. Prereq-Tune can also be combined with fictitious synthetic data to enhance the grounding of LLM outputs to their internal knowledge. Experiments show that Prereq-Tune outperforms existing baselines in improving LLM's factuality across short QA and long-form generation tasks. It also opens new possibilities for knowledge-controlled generation in LLMs. Our code is available at https://github.com/UCSB-NLP-Chang/Prereq_tune.git. |
2024-10-28T00:00:00 | 2410.19133 | Hybrid Preferences: Learning to Route Instances for Human vs. AI Feedback | [
"Lester James V. Miranda",
"Yizhong Wang",
"Yanai Elazar",
"Sachin Kumar",
"Valentina Pyatkin",
"Faeze Brahman",
"Noah A. Smith",
"Hannaneh Hajishirzi",
"Pradeep Dasigi"
] | Learning from human feedback has enabled the alignment of language models (LMs) with human preferences. However, directly collecting human preferences can be expensive, time-consuming, and can have high variance. An appealing alternative is to distill preferences from LMs as a source of synthetic annotations as they are more consistent, cheaper, and scale better than human annotation; however, they are also prone to biases and errors. In this work, we introduce a routing framework that combines inputs from humans and LMs to achieve better annotation quality, while reducing the total cost of human annotation. The crux of our approach is to identify preference instances that will benefit from human annotations. We formulate this as an optimization problem: given a preference dataset and an evaluation metric, we train a performance prediction model to predict a reward model's performance on an arbitrary combination of human and LM annotations and employ a routing strategy that selects a combination that maximizes predicted performance. We train the performance prediction model on MultiPref, a new preference dataset with 10K instances paired with human and LM labels. We show that the selected hybrid mixture of LM and direct human preferences using our routing framework achieves better reward model performance compared to using either one exclusively. We simulate selective human preference collection on three other datasets and show that our method generalizes well to all three. We analyze features from the routing model to identify characteristics of instances that can benefit from human feedback, e.g., prompts with a moderate safety concern or moderate intent complexity. We release the dataset, annotation platform, and source code used in this study to foster more efficient and accurate preference collection in the future. |
|
2024-10-28T00:00:00 | 2410.16048 | Continuous Speech Synthesis using per-token Latent Diffusion | [
"Arnon Turetzky",
"Nimrod Shabtay",
"Slava Shechtman",
"Hagai Aronowitz",
"David Haws",
"Ron Hoory",
"Avihu Dekel"
] | The success of autoregressive transformer models with discrete tokens has inspired quantization-based approaches for continuous modalities, though these often limit reconstruction quality. We therefore introduce SALAD, a per-token latent diffusion model for zero-shot text-to-speech, that operates on continuous representations. SALAD builds upon the recently proposed expressive diffusion head for image generation, and extends it to generate variable-length outputs. Our approach utilizes semantic tokens for providing contextual information and determining the stopping condition. We suggest three continuous variants for our method, extending popular discrete speech synthesis techniques. Additionally, we implement discrete baselines for each variant and conduct a comparative analysis of discrete versus continuous speech modeling techniques. Our results demonstrate that both continuous and discrete approaches are highly competent, and that SALAD achieves a superior intelligibility score while obtaining speech quality and speaker similarity on par with the ground-truth audio. |
|
2024-10-28T00:00:00 | 2410.19730 | Counting Ability of Large Language Models and Impact of Tokenization | [
"Xiang Zhang",
"Juntai Cao",
"Chenyu You"
] | Transformers, the backbone of modern large language models (LLMs), face inherent architectural limitations that impede their reasoning capabilities. Unlike recurrent networks, Transformers lack recurrent connections, confining them to constant-depth computation. This restriction places them in the complexity class TC^0, making them theoretically incapable of solving tasks that demand increasingly deep reasoning as input length grows. Counting, a fundamental component of many reasoning tasks, also requires reasoning depth to grow linearly to be performed inductively. While previous studies have established the upper limits of counting ability in Transformer-based expert models (i.e., models specifically trained for counting tasks), these findings do not directly extend to general-purpose LLMs due to differences in reasoning mechanisms. Recent work has highlighted how Chain of Thought (CoT) reasoning can help alleviate some of the architectural limitations of Transformers in counting tasks. However, little attention has been paid to the role of tokenization in these models. Unlike expert models that often use character-level tokenization, LLMs typically rely on byte-level (BPE) tokenizers, which fundamentally alters the way reasoning is processed. Our work investigates the impact of tokenization on the counting abilities of LLMs, uncovering substantial performance variations based on input tokenization differences. We provide both theoretical and experimental analyses, offering insights into how tokenization choices can undermine models' theoretical computability, thereby inspiring the design of new tokenization methods to enhance reasoning in LLMs. |
|
2024-10-28T00:00:00 | 2410.16270 | Reflection-Bench: probing AI intelligence with reflection | [
"Lingyu Li",
"Yixu Wang",
"Haiquan Zhao",
"Shuqi Kong",
"Yan Teng",
"Chunbo Li",
"Yingchun Wang"
] | https://github.com/YabYum/ReflectionBench | The ability to adapt beliefs or behaviors in response to unexpected outcomes, reflection, is fundamental to intelligent systems' interaction with the world. From a cognitive science perspective, this serves as a core principle of intelligence applicable to both human and AI systems. To address the debate on the intelligence of large language models (LLMs), we propose Reflection-Bench, a comprehensive benchmark comprising 7 tasks spanning core cognitive functions crucial for reflection, including perception, memory, belief updating, decision-making, prediction, counterfactual thinking, and meta-reflection. We evaluate the performances of 13 prominent LLMs such as OpenAI o1, GPT-4, Claude 3.5 Sonnet, etc. The results indicate that current LLMs still lack satisfactory reflection ability. We discuss the underlying causes of these results and suggest potential avenues for future research. In conclusion, Reflection-Bench offers both evaluation tools and inspiration for developing AI capable of reliably interacting with the environment. Our data and code are available at https://github.com/YabYum/ReflectionBench. |
2024-10-28T00:00:00 | 2410.18912 | Dynamic 3D Gaussian Tracking for Graph-Based Neural Dynamics Modeling | [
"Mingtong Zhang",
"Kaifeng Zhang",
"Yunzhu Li"
] | Videos of robots interacting with objects encode rich information about the objects' dynamics. However, existing video prediction approaches typically do not explicitly account for the 3D information from videos, such as robot actions and objects' 3D states, limiting their use in real-world robotic applications. In this work, we introduce a framework to learn object dynamics directly from multi-view RGB videos by explicitly considering the robot's action trajectories and their effects on scene dynamics. We utilize the 3D Gaussian representation of 3D Gaussian Splatting (3DGS) to train a particle-based dynamics model using Graph Neural Networks. This model operates on sparse control particles downsampled from the densely tracked 3D Gaussian reconstructions. By learning the neural dynamics model on offline robot interaction data, our method can predict object motions under varying initial configurations and unseen robot actions. The 3D transformations of Gaussians can be interpolated from the motions of control particles, enabling the rendering of predicted future object states and achieving action-conditioned video prediction. The dynamics model can also be applied to model-based planning frameworks for object manipulation tasks. We conduct experiments on various kinds of deformable materials, including ropes, clothes, and stuffed animals, demonstrating our framework's ability to model complex shapes and dynamics. Our project page is available at https://gs-dynamics.github.io. |
|
2024-10-28T00:00:00 | 2410.18889 | Are LLMs Better than Reported? Detecting Label Errors and Mitigating Their Effect on Model Performance | [
"Omer Nahum",
"Nitay Calderon",
"Orgad Keller",
"Idan Szpektor",
"Roi Reichart"
] | NLP benchmarks rely on standardized datasets for training and evaluating models and are crucial for advancing the field. Traditionally, expert annotations ensure high-quality labels; however, the cost of expert annotation does not scale well with the growing demand for larger datasets required by modern models. While crowd-sourcing provides a more scalable solution, it often comes at the expense of annotation precision and consistency. Recent advancements in large language models (LLMs) offer new opportunities to enhance the annotation process, particularly for detecting label errors in existing datasets. In this work, we consider the recent approach of LLM-as-a-judge, leveraging an ensemble of LLMs to flag potentially mislabeled examples. Through a case study of four datasets from the TRUE benchmark, covering different tasks and domains, we empirically analyze the labeling quality of existing datasets, and compare expert, crowd-sourced, and our LLM-based annotations in terms of agreement, label quality, and efficiency, demonstrating the strengths and limitations of each annotation method. Our findings reveal a substantial number of label errors, which, when corrected, induce a significant upward shift in reported model performance. This suggests that many of the LLMs' so-called mistakes are due to label errors rather than genuine model failures. Additionally, we discuss the implications of mislabeled data and propose methods to mitigate them in training to improve model performance. |
|
2024-10-28T00:00:00 | 2410.16090 | Analysing the Residual Stream of Language Models Under Knowledge Conflicts | [
"Yu Zhao",
"Xiaotang Du",
"Giwon Hong",
"Aryo Pradipta Gema",
"Alessio Devoto",
"Hongru Wang",
"Xuanli He",
"Kam-Fai Wong",
"Pasquale Minervini"
] | Large language models (LLMs) can store a significant amount of factual knowledge in their parameters. However, their parametric knowledge may conflict with the information provided in the context. Such conflicts can lead to undesirable model behaviour, such as reliance on outdated or incorrect information. In this work, we investigate whether LLMs can identify knowledge conflicts and whether it is possible to know which source of knowledge the model will rely on by analysing the residual stream of the LLM. Through probing tasks, we find that LLMs can internally register the signal of knowledge conflict in the residual stream, which can be accurately detected by probing the intermediate model activations. This allows us to detect conflicts within the residual stream before generating the answers without modifying the input or model parameters. Moreover, we find that the residual stream shows significantly different patterns when the model relies on contextual knowledge versus parametric knowledge to resolve conflicts. This pattern can be employed to estimate the behaviour of LLMs when conflict happens and prevent unexpected answers before producing the answers. Our analysis offers insights into how LLMs internally manage knowledge conflicts and provides a foundation for developing methods to control the knowledge selection processes. |
|
2024-10-28T00:00:00 | 2410.17655 | Mapping the Media Landscape: Predicting Factual Reporting and Political Bias Through Web Interactions | [
"Dairazalia Sánchez-Cortés",
"Sergio Burdisso",
"Esaú Villatoro-Tello",
"Petr Motlicek"
] | Bias assessment of news sources is paramount for professionals, organizations, and researchers who rely on truthful evidence for information gathering and reporting. While certain bias indicators are discernible from content analysis, descriptors like political bias and fake news pose greater challenges. In this paper, we propose an extension to a recently presented news media reliability estimation method that focuses on modeling outlets and their longitudinal web interactions. Concretely, we assess the classification performance of four reinforcement learning strategies on a large news media hyperlink graph. Our experiments, targeting two challenging bias descriptors, factual reporting and political bias, showed a significant performance improvement at the source media level. Additionally, we validate our methods on the CLEF 2023 CheckThat! Lab challenge, outperforming the reported results in both F1-score and the official MAE metric. Furthermore, we contribute by releasing the largest annotated dataset of news source media, categorized with factual reporting and political bias labels. Our findings suggest that profiling news media sources based on their hyperlink interactions over time is feasible, offering a bird's-eye view of evolving media landscapes. |
|
2024-10-28T00:00:00 | 2410.19123 | Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design | [
"Ruisi Cai",
"Yeonju Ro",
"Geon-Woo Kim",
"Peihao Wang",
"Babak Ehteshami Bejnordi",
"Aditya Akella",
"Zhangyang Wang"
] | https://github.com/VITA-Group/READ-ME | The proliferation of large language models (LLMs) has led to the adoption of Mixture-of-Experts (MoE) architectures that dynamically leverage specialized subnetworks for improved efficiency and performance. Despite their benefits, MoE models face significant challenges during inference, including inefficient memory management and suboptimal batching, due to misaligned design choices between the model architecture and the system policies. Furthermore, the conventional approach of training MoEs from scratch is increasingly prohibitive in terms of cost. In this paper, we propose a novel framework Read-ME that transforms pre-trained dense LLMs into smaller MoE models (in contrast to "upcycling" generalist MoEs), avoiding the high costs of ground-up training. Our approach employs activation sparsity to extract experts. To compose experts, we examine the widely-adopted layer-wise router design and show its redundancy, and thus we introduce the pre-gating router decoupled from the MoE backbone that facilitates system-friendly pre-computing and lookahead scheduling, enhancing expert-aware batching and caching. Our codesign therefore addresses critical gaps on both the algorithmic and system fronts, establishing a scalable and efficient alternative for LLM inference in resource-constrained settings. Read-ME outperforms other popular open-source dense models of similar scales, achieving improvements of up to 10.1% on MMLU, and improving mean end-to-end latency up to 6.1%. Codes are available at: https://github.com/VITA-Group/READ-ME. |
2024-10-29T00:00:00 | 2410.21252 | LongReward: Improving Long-context Large Language Models with AI Feedback | [
"Jiajie Zhang",
"Zhongni Hou",
"Xin Lv",
"Shulin Cao",
"Zhenyu Hou",
"Yilin Niu",
"Lei Hou",
"Yuxiao Dong",
"Ling Feng",
"Juanzi Li"
] | Though significant advancements have been achieved in developing long-context large language models (LLMs), the compromised quality of LLM-synthesized data for supervised fine-tuning (SFT) often affects the long-context performance of SFT models and leads to inherent limitations. In principle, reinforcement learning (RL) with appropriate reward signals can further enhance models' capacities. However, how to obtain reliable rewards in long-context scenarios remains unexplored. To this end, we propose LongReward, a novel method that utilizes an off-the-shelf LLM to provide rewards for long-context model responses from four human-valued dimensions: helpfulness, logicality, faithfulness, and completeness, each with a carefully designed assessment pipeline. By combining LongReward and offline RL algorithm DPO, we are able to effectively improve long-context SFT models. Our experiments indicate that LongReward not only significantly improves models' long-context performance but also enhances their ability to follow short instructions. We also find that long-context DPO with LongReward and conventional short-context DPO can be used together without hurting either one's performance. |
|
2024-10-29T00:00:00 | 2410.21220 | Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines | [
"Zhixin Zhang",
"Yiyuan Zhang",
"Xiaohan Ding",
"Xiangyu Yue"
] | Search engines enable the retrieval of unknown information with texts. However, traditional methods fall short when it comes to understanding unfamiliar visual content, such as identifying an object that the model has never seen before. This challenge is particularly pronounced for large vision-language models (VLMs): if the model has not been exposed to the object depicted in an image, it struggles to generate reliable answers to the user's question regarding that image. Moreover, as new objects and events continuously emerge, frequently updating VLMs is impractical due to heavy computational burdens. To address this limitation, we propose Vision Search Assistant, a novel framework that facilitates collaboration between VLMs and web agents. This approach leverages VLMs' visual understanding capabilities and web agents' real-time information access to perform open-world Retrieval-Augmented Generation via the web. By integrating visual and textual representations through this collaboration, the model can provide informed responses even when the image is novel to the system. Extensive experiments conducted on both open-set and closed-set QA benchmarks demonstrate that the Vision Search Assistant significantly outperforms the other models and can be widely applied to existing VLMs. |
|
2024-10-29T00:00:00 | 2410.20011 | A Survey of Small Language Models | [
"Chien Van Nguyen",
"Xuan Shen",
"Ryan Aponte",
"Yu Xia",
"Samyadeep Basu",
"Zhengmian Hu",
"Jian Chen",
"Mihir Parmar",
"Sasidhar Kunapuli",
"Joe Barrow",
"Junda Wu",
"Ashish Singh",
"Yu Wang",
"Jiuxiang Gu",
"Franck Dernoncourt",
"Nesreen K. Ahmed",
"Nedim Lipka",
"Ruiyi Zhang",
"Xiang Chen",
"Tong Yu",
"Sungchul Kim",
"Hanieh Deilamsalehy",
"Namyong Park",
"Mike Rimer",
"Zhehao Zhang",
"Huanrui Yang",
"Ryan A. Rossi",
"Thien Huu Nguyen"
] | Small Language Models (SLMs) have become increasingly important due to their efficiency and ability to perform various language tasks with minimal computational resources, making them ideal for a variety of settings, including on-device, mobile, and edge devices, among many others. In this article, we present a comprehensive survey on SLMs, focusing on their architectures, training techniques, and model compression techniques. We propose a novel taxonomy for categorizing the methods used to optimize SLMs, including model compression, pruning, and quantization techniques. We summarize the benchmark datasets that are useful for benchmarking SLMs along with the evaluation metrics commonly used. Additionally, we highlight key open challenges that remain to be addressed. Our survey aims to serve as a valuable resource for researchers and practitioners interested in developing and deploying small yet efficient language models. |
|
2024-10-29T00:00:00 | 2410.21264 | LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior | [
"Hanyu Wang",
"Saksham Suri",
"Yixuan Ren",
"Hao Chen",
"Abhinav Shrivastava"
] | We present LARP, a novel video tokenizer designed to overcome limitations in current video tokenization methods for autoregressive (AR) generative models. Unlike traditional patchwise tokenizers that directly encode local visual patches into discrete tokens, LARP introduces a holistic tokenization scheme that gathers information from the visual content using a set of learned holistic queries. This design allows LARP to capture more global and semantic representations, rather than being limited to local patch-level information. Furthermore, it offers flexibility by supporting an arbitrary number of discrete tokens, enabling adaptive and efficient tokenization based on the specific requirements of the task. To align the discrete token space with downstream AR generation tasks, LARP integrates a lightweight AR transformer as a training-time prior model that predicts the next token on its discrete latent space. By incorporating the prior model during training, LARP learns a latent space that is not only optimized for video reconstruction but is also structured in a way that is more conducive to autoregressive generation. Moreover, this process defines a sequential order for the discrete tokens, progressively pushing them toward an optimal configuration during training, ensuring smoother and more accurate AR generation at inference time. Comprehensive experiments demonstrate LARP's strong performance, achieving state-of-the-art FVD on the UCF101 class-conditional video generation benchmark. LARP enhances the compatibility of AR models with videos and opens up the potential to build unified high-fidelity multimodal large language models (MLLMs). |
|
2024-10-29T00:00:00 | 2410.18666 | DreamClear: High-Capacity Real-World Image Restoration with Privacy-Safe Dataset Curation | [
"Yuang Ai",
"Xiaoqiang Zhou",
"Huaibo Huang",
"Xiaotian Han",
"Zhengyu Chen",
"Quanzeng You",
"Hongxia Yang"
] | https://github.com/shallowdream204/DreamClear | Image restoration (IR) in real-world scenarios presents significant challenges due to the lack of high-capacity models and comprehensive datasets. To tackle these issues, we present a dual strategy: GenIR, an innovative data curation pipeline, and DreamClear, a cutting-edge Diffusion Transformer (DiT)-based image restoration model. GenIR, our pioneering contribution, is a dual-prompt learning pipeline that overcomes the limitations of existing datasets, which typically comprise only a few thousand images and thus offer limited generalizability for larger models. GenIR streamlines the process into three stages: image-text pair construction, dual-prompt based fine-tuning, and data generation & filtering. This approach circumvents the laborious data crawling process, ensuring copyright compliance and providing a cost-effective, privacy-safe solution for IR dataset construction. The result is a large-scale dataset of one million high-quality images. Our second contribution, DreamClear, is a DiT-based image restoration model. It utilizes the generative priors of text-to-image (T2I) diffusion models and the robust perceptual capabilities of multi-modal large language models (MLLMs) to achieve photorealistic restoration. To boost the model's adaptability to diverse real-world degradations, we introduce the Mixture of Adaptive Modulator (MoAM). It employs token-wise degradation priors to dynamically integrate various restoration experts, thereby expanding the range of degradations the model can address. Our exhaustive experiments confirm DreamClear's superior performance, underlining the efficacy of our dual strategy for real-world image restoration. Code and pre-trained models will be available at: https://github.com/shallowdream204/DreamClear. |
2024-10-29T00:00:00 | 2410.20280 | MarDini: Masked Autoregressive Diffusion for Video Generation at Scale | [
"Haozhe Liu",
"Shikun Liu",
"Zijian Zhou",
"Mengmeng Xu",
"Yanping Xie",
"Xiao Han",
"Juan C. Pérez",
"Ding Liu",
"Kumara Kahatapitiya",
"Menglin Jia",
"Jui-Chieh Wu",
"Sen He",
"Tao Xiang",
"Jürgen Schmidhuber",
"Juan-Manuel Pérez-Rúa"
] | We introduce MarDini, a new family of video diffusion models that integrate the advantages of masked auto-regression (MAR) into a unified diffusion model (DM) framework. Here, MAR handles temporal planning, while DM focuses on spatial generation in an asymmetric network design: i) a MAR-based planning model containing most of the parameters generates planning signals for each masked frame using low-resolution input; ii) a lightweight generation model uses these signals to produce high-resolution frames via diffusion de-noising. MarDini's MAR enables video generation conditioned on any number of masked frames at any frame positions: a single model can handle video interpolation (e.g., masking middle frames), image-to-video generation (e.g., masking from the second frame onward), and video expansion (e.g., masking half the frames). The efficient design allocates most of the computational resources to the low-resolution planning model, making computationally expensive but important spatio-temporal attention feasible at scale. MarDini sets a new state-of-the-art for video interpolation; meanwhile, within few inference steps, it efficiently generates videos on par with those of much more expensive advanced image-to-video models. |
|
2024-10-29T00:00:00 | 2410.19313 | COAT: Compressing Optimizer states and Activation for Memory-Efficient FP8 Training | [
"Haocheng Xi",
"Han Cai",
"Ligeng Zhu",
"Yao Lu",
"Kurt Keutzer",
"Jianfei Chen",
"Song Han"
] | https://github.com/NVlabs/COAT | FP8 training has emerged as a promising method for improving training efficiency. Existing frameworks accelerate training by applying FP8 computation to linear layers while leaving optimizer states and activations in higher precision, which fails to fully optimize memory usage. This paper introduces COAT (Compressing Optimizer States and Activations for FP8 Training), a novel FP8 training framework designed to significantly reduce memory footprint when training large models. COAT addresses current limitations through two key innovations: (1) Dynamic Range Expansion, which aligns optimizer state distributions more closely with the FP8 representation range, thereby reducing quantization error, and (2) Mixed-Granularity Activation Quantization, which optimizes activation memory using a combination of per-tensor and per-group quantization strategies. Experiments demonstrate that COAT effectively reduces end-to-end training memory footprint by 1.54x compared to BF16 while achieving nearly lossless performance across various tasks, such as Large Language Model pretraining and fine-tuning and Vision Language Model training. COAT also achieves a 1.43x end-to-end training speedup compared to BF16, performing on par with or surpassing TransformerEngine's speedup. COAT enables efficient full-parameter training of large models on fewer GPUs, and facilitates doubling the batch size in distributed training settings, providing a practical solution for scaling large-scale model training. The code is available at https://github.com/NVlabs/COAT. |
2024-10-29T00:00:00 | 2410.20474 | GrounDiT: Grounding Diffusion Transformers via Noisy Patch Transplantation | [
"Phillip Y. Lee",
"Taehoon Yoon",
"Minhyuk Sung"
] | We introduce a novel training-free spatial grounding technique for text-to-image generation using Diffusion Transformers (DiT). Spatial grounding with bounding boxes has gained attention for its simplicity and versatility, allowing for enhanced user control in image generation. However, prior training-free approaches often rely on updating the noisy image during the reverse diffusion process via backpropagation from custom loss functions, which frequently struggle to provide precise control over individual bounding boxes. In this work, we leverage the flexibility of the Transformer architecture, demonstrating that DiT can generate noisy patches corresponding to each bounding box, fully encoding the target object and allowing for fine-grained control over each region. Our approach builds on an intriguing property of DiT, which we refer to as semantic sharing. Due to semantic sharing, when a smaller patch is jointly denoised alongside a generatable-size image, the two become "semantic clones". Each patch is denoised in its own branch of the generation process and then transplanted into the corresponding region of the original noisy image at each timestep, resulting in robust spatial grounding for each bounding box. In our experiments on the HRS and DrawBench benchmarks, we achieve state-of-the-art performance compared to previous training-free spatial grounding approaches. |
|
2024-10-29T00:00:00 | 2410.20636 | Language Models And A Second Opinion Use Case: The Pocket Professional | [
"David Noever"
] | This research tests the role of Large Language Models (LLMs) as formal second opinion tools in professional decision-making, particularly focusing on complex medical cases where even experienced physicians seek peer consultation. The work analyzed 183 challenging medical cases from Medscape over a 20-month period, testing multiple LLMs' performance against crowd-sourced physician responses. A key finding was the high overall score possible in the latest foundational models (>80% accuracy compared to consensus opinion), which exceeds most human metrics reported on the same clinical cases (450 pages of patient profiles, test results). The study documents a disparity in the LLMs' performance between straightforward cases (>81% accuracy) and complex scenarios (43% accuracy), particularly in cases that generated substantial debate among human physicians. The research demonstrates that LLMs may be valuable as generators of comprehensive differential diagnoses rather than as primary diagnostic tools, potentially helping to counter cognitive biases in clinical decision-making, reduce cognitive loads, and thus remove some sources of medical error. The inclusion of a second comparative legal dataset (Supreme Court cases, N=21) provides added empirical context for the use of AI to foster second opinions, though these legal challenges proved considerably easier for LLMs to analyze. In addition to the original contributions of empirical evidence for LLM accuracy, the research aggregated a novel benchmark that others can use to score the reliability of highly contested questions and answers between both LLMs and disagreeing human practitioners. These results suggest that the optimal deployment of LLMs in professional settings may differ substantially from current approaches that emphasize automation of routine tasks. |
|
2024-10-29T00:00:00 | 2410.20220 | Neural Fields in Robotics: A Survey | [
"Muhammad Zubair Irshad",
"Mauro Comi",
"Yen-Chen Lin",
"Nick Heppert",
"Abhinav Valada",
"Rares Ambrus",
"Zsolt Kira",
"Jonathan Tremblay"
] | Neural Fields have emerged as a transformative approach for 3D scene representation in computer vision and robotics, enabling accurate inference of geometry, 3D semantics, and dynamics from posed 2D data. Leveraging differentiable rendering, Neural Fields encompass both continuous implicit and explicit neural representations enabling high-fidelity 3D reconstruction, integration of multi-modal sensor data, and generation of novel viewpoints. This survey explores their applications in robotics, emphasizing their potential to enhance perception, planning, and control. Their compactness, memory efficiency, and differentiability, along with seamless integration with foundation and generative models, make them ideal for real-time applications, improving robot adaptability and decision-making. This paper provides a thorough review of Neural Fields in robotics, categorizing applications across various domains and evaluating their strengths and limitations, based on over 200 papers. First, we present four key Neural Fields frameworks: Occupancy Networks, Signed Distance Fields, Neural Radiance Fields, and Gaussian Splatting. Second, we detail Neural Fields' applications in five major robotics domains: pose estimation, manipulation, navigation, physics, and autonomous driving, highlighting key works and discussing takeaways and open challenges. Finally, we outline the current limitations of Neural Fields in robotics and propose promising directions for future research. Project page: https://robonerf.github.io |
|
2024-10-29T00:00:00 | 2410.21276 | GPT-4o System Card | [
"OpenAI",
"Aaron Hurst",
"Adam Lerer",
"Adam P. Goucher",
"Adam Perelman",
"Aditya Ramesh",
"Aidan Clark",
"AJ Ostrow",
"Akila Welihinda",
"Alan Hayes",
"Alec Radford",
"Aleksander Mądry",
"Alex Baker-Whitcomb",
"Alex Beutel",
"Alex Borzunov",
"Alex Carney",
"Alex Chow",
"Alex Kirillov",
"Alex Nichol",
"Alex Paino",
"Alex Renzin",
"Alex Tachard Passos",
"Alexander Kirillov",
"Alexi Christakis",
"Alexis Conneau",
"Ali Kamali",
"Allan Jabri",
"Allison Moyer",
"Allison Tam",
"Amadou Crookes",
"Amin Tootoochian",
"Amin Tootoonchian",
"Ananya Kumar",
"Andrea Vallone",
"Andrej Karpathy",
"Andrew Braunstein",
"Andrew Cann",
"Andrew Codispoti",
"Andrew Galu",
"Andrew Kondrich",
"Andrew Tulloch",
"Andrey Mishchenko",
"Angela Baek",
"Angela Jiang",
"Antoine Pelisse",
"Antonia Woodford",
"Anuj Gosalia",
"Arka Dhar",
"Ashley Pantuliano",
"Avi Nayak",
"Avital Oliver",
"Barret Zoph",
"Behrooz Ghorbani",
"Ben Leimberger",
"Ben Rossen",
"Ben Sokolowsky",
"Ben Wang",
"Benjamin Zweig",
"Beth Hoover",
"Blake Samic",
"Bob McGrew",
"Bobby Spero",
"Bogo Giertler",
"Bowen Cheng",
"Brad Lightcap",
"Brandon Walkin",
"Brendan Quinn",
"Brian Guarraci",
"Brian Hsu",
"Bright Kellogg",
"Brydon Eastman",
"Camillo Lugaresi",
"Carroll Wainwright",
"Cary Bassin",
"Cary Hudson",
"Casey Chu",
"Chad Nelson",
"Chak Li",
"Chan Jun Shern",
"Channing Conger",
"Charlotte Barette",
"Chelsea Voss",
"Chen Ding",
"Cheng Lu",
"Chong Zhang",
"Chris Beaumont",
"Chris Hallacy",
"Chris Koch",
"Christian Gibson",
"Christina Kim",
"Christine Choi",
"Christine McLeavey",
"Christopher Hesse",
"Claudia Fischer",
"Clemens Winter",
"Coley Czarnecki",
"Colin Jarvis",
"Colin Wei",
"Constantin Koumouzelis",
"Dane Sherburn",
"Daniel Kappler",
"Daniel Levin",
"Daniel Levy",
"David Carr",
"David Farhi",
"David Mely",
"David Robinson",
"David Sasaki",
"Denny Jin",
"Dev Valladares",
"Dimitris Tsipras",
"Doug Li",
"Duc Phong Nguyen",
"Duncan Findlay",
"Edede Oiwoh",
"Edmund Wong",
"Ehsan Asdar",
"Elizabeth Proehl",
"Elizabeth Yang",
"Eric Antonow",
"Eric Kramer",
"Eric Peterson",
"Eric Sigler",
"Eric Wallace",
"Eugene Brevdo",
"Evan Mays",
"Farzad Khorasani",
"Felipe Petroski Such",
"Filippo Raso",
"Francis Zhang",
"Fred von Lohmann",
"Freddie Sulit",
"Gabriel Goh",
"Gene Oden",
"Geoff Salmon",
"Giulio Starace",
"Greg Brockman",
"Hadi Salman",
"Haiming Bao",
"Haitang Hu",
"Hannah Wong",
"Haoyu Wang",
"Heather Schmidt",
"Heather Whitney",
"Heewoo Jun",
"Hendrik Kirchner",
"Henrique Ponde de Oliveira Pinto",
"Hongyu Ren",
"Huiwen Chang",
"Hyung Won Chung",
"Ian Kivlichan",
"Ian O'Connell",
"Ian O'Connell",
"Ian Osband",
"Ian Silber",
"Ian Sohl",
"Ibrahim Okuyucu",
"Ikai Lan",
"Ilya Kostrikov",
"Ilya Sutskever",
"Ingmar Kanitscheider",
"Ishaan Gulrajani",
"Jacob Coxon",
"Jacob Menick",
"Jakub Pachocki",
"James Aung",
"James Betker",
"James Crooks",
"James Lennon",
"Jamie Kiros",
"Jan Leike",
"Jane Park",
"Jason Kwon",
"Jason Phang",
"Jason Teplitz",
"Jason Wei",
"Jason Wolfe",
"Jay Chen",
"Jeff Harris",
"Jenia Varavva",
"Jessica Gan Lee",
"Jessica Shieh",
"Ji Lin",
"Jiahui Yu",
"Jiayi Weng",
"Jie Tang",
"Jieqi Yu",
"Joanne Jang",
"Joaquin Quinonero Candela",
"Joe Beutler",
"Joe Landers",
"Joel Parish",
"Johannes Heidecke",
"John Schulman",
"Jonathan Lachman",
"Jonathan McKay",
"Jonathan Uesato",
"Jonathan Ward",
"Jong Wook Kim",
"Joost Huizinga",
"Jordan Sitkin",
"Jos Kraaijeveld",
"Josh Gross",
"Josh Kaplan",
"Josh Snyder",
"Joshua Achiam",
"Joy Jiao",
"Joyce Lee",
"Juntang Zhuang",
"Justyn Harriman",
"Kai Fricke",
"Kai Hayashi",
"Karan Singhal",
"Katy Shi",
"Kavin Karthik",
"Kayla Wood",
"Kendra Rimbach",
"Kenny Hsu",
"Kenny Nguyen",
"Keren Gu-Lemberg",
"Kevin Button",
"Kevin Liu",
"Kiel Howe",
"Krithika Muthukumar",
"Kyle Luther",
"Lama Ahmad",
"Larry Kai",
"Lauren Itow",
"Lauren Workman",
"Leher Pathak",
"Leo Chen",
"Li Jing",
"Lia Guy",
"Liam Fedus",
"Liang Zhou",
"Lien Mamitsuka",
"Lilian Weng",
"Lindsay McCallum",
"Lindsey Held",
"Long Ouyang",
"Louis Feuvrier",
"Lu Zhang",
"Lukas Kondraciuk",
"Lukasz Kaiser",
"Luke Hewitt",
"Luke Metz",
"Lyric Doshi",
"Mada Aflak",
"Maddie Simens",
"Madelaine Boyd",
"Madeleine Thompson",
"Marat Dukhan",
"Mark Chen",
"Mark Gray",
"Mark Hudnall",
"Marvin Zhang",
"Marwan Aljubeh",
"Mateusz Litwin",
"Matthew Zeng",
"Max Johnson",
"Maya Shetty",
"Mayank Gupta",
"Meghan Shah",
"Mehmet Yatbaz",
"Meng Jia Yang",
"Mengchao Zhong",
"Mia Glaese",
"Mianna Chen",
"Michael Janner",
"Michael Lampe",
"Michael Petrov",
"Michael Wu",
"Michele Wang",
"Michelle Fradin",
"Michelle Pokrass",
"Miguel Castro",
"Miguel Oom Temudo de Castro",
"Mikhail Pavlov",
"Miles Brundage",
"Miles Wang",
"Minal Khan",
"Mira Murati",
"Mo Bavarian",
"Molly Lin",
"Murat Yesildal",
"Nacho Soto",
"Natalia Gimelshein",
"Natalie Cone",
"Natalie Staudacher",
"Natalie Summers",
"Natan LaFontaine",
"Neil Chowdhury",
"Nick Ryder",
"Nick Stathas",
"Nick Turley",
"Nik Tezak",
"Niko Felix",
"Nithanth Kudige",
"Nitish Keskar",
"Noah Deutsch",
"Noel Bundick",
"Nora Puckett",
"Ofir Nachum",
"Ola Okelola",
"Oleg Boiko",
"Oleg Murk",
"Oliver Jaffe",
"Olivia Watkins",
"Olivier Godement",
"Owen Campbell-Moore",
"Patrick Chao",
"Paul McMillan",
"Pavel Belov",
"Peng Su",
"Peter Bak",
"Peter Bakkum",
"Peter Deng",
"Peter Dolan",
"Peter Hoeschele",
"Peter Welinder",
"Phil Tillet",
"Philip Pronin",
"Philippe Tillet",
"Prafulla Dhariwal",
"Qiming Yuan",
"Rachel Dias",
"Rachel Lim",
"Rahul Arora",
"Rajan Troll",
"Randall Lin",
"Rapha Gontijo Lopes",
"Raul Puri",
"Reah Miyara",
"Reimar Leike",
"Renaud Gaubert",
"Reza Zamani",
"Ricky Wang",
"Rob Donnelly",
"Rob Honsby",
"Rocky Smith",
"Rohan Sahai",
"Rohit Ramchandani",
"Romain Huet",
"Rory Carmichael",
"Rowan Zellers",
"Roy Chen",
"Ruby Chen",
"Ruslan Nigmatullin",
"Ryan Cheu",
"Saachi Jain",
"Sam Altman",
"Sam Schoenholz",
"Sam Toizer",
"Samuel Miserendino",
"Sandhini Agarwal",
"Sara Culver",
"Scott Ethersmith",
"Scott Gray",
"Sean Grove",
"Sean Metzger",
"Shamez Hermani",
"Shantanu Jain",
"Shengjia Zhao",
"Sherwin Wu",
"Shino Jomoto",
"Shirong Wu",
"Shuaiqi",
"Xia",
"Sonia Phene",
"Spencer Papay",
"Srinivas Narayanan",
"Steve Coffey",
"Steve Lee",
"Stewart Hall",
"Suchir Balaji",
"Tal Broda",
"Tal Stramer",
"Tao Xu",
"Tarun Gogineni",
"Taya Christianson",
"Ted Sanders",
"Tejal Patwardhan",
"Thomas Cunninghman",
"Thomas Degry",
"Thomas Dimson",
"Thomas Raoux",
"Thomas Shadwell",
"Tianhao Zheng",
"Todd Underwood",
"Todor Markov",
"Toki Sherbakov",
"Tom Rubin",
"Tom Stasi",
"Tomer Kaftan",
"Tristan Heywood",
"Troy Peterson",
"Tyce Walters",
"Tyna Eloundou",
"Valerie Qi",
"Veit Moeller",
"Vinnie Monaco",
"Vishal Kuo",
"Vlad Fomenko",
"Wayne Chang",
"Weiyi Zheng",
"Wenda Zhou",
"Wesam Manassra",
"Will Sheu",
"Wojciech Zaremba",
"Yash Patil",
"Yilei Qian",
"Yongjik Kim",
"Youlong Cheng",
"Yu Zhang",
"Yuchen He",
"Yuchen Zhang",
"Yujia Jin",
"Yunxing Dai",
"Yury Malkov"
] | GPT-4o is an autoregressive omni model that accepts as input any combination of text, audio, image, and video, and generates any combination of text, audio, and image outputs. It's trained end-to-end across text, vision, and audio, meaning all inputs and outputs are processed by the same neural network. GPT-4o can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models. In line with our commitment to building AI safely and consistent with our voluntary commitments to the White House, we are sharing the GPT-4o System Card, which includes our Preparedness Framework evaluations. In this System Card, we provide a detailed look at GPT-4o's capabilities, limitations, and safety evaluations across multiple categories, focusing on speech-to-speech while also evaluating text and image capabilities, and measures we've implemented to ensure the model is safe and aligned. We also include third-party assessments on dangerous capabilities, as well as discussion of potential societal impacts of GPT-4o's text and vision capabilities. |
|
2024-10-29T00:00:00 | 2410.18603 | AgentStore: Scalable Integration of Heterogeneous Agents As Specialized Generalist Computer Assistant | [
"Chengyou Jia",
"Minnan Luo",
"Zhuohang Dang",
"Qiushi Sun",
"Fangzhi Xu",
"Junlin Hu",
"Tianbao Xie",
"Zhiyong Wu"
] | Digital agents capable of automating complex computer tasks have attracted considerable attention due to their immense potential to enhance human-computer interaction. However, existing agent methods exhibit deficiencies in their generalization and specialization capabilities, especially in handling open-ended computer tasks in real-world environments. Inspired by the rich functionality of the App store, we present AgentStore, a scalable platform designed to dynamically integrate heterogeneous agents for automating computer tasks. AgentStore empowers users to integrate third-party agents, allowing the system to continuously enrich its capabilities and adapt to rapidly evolving operating systems. Additionally, we propose a novel core MetaAgent with the AgentToken strategy to efficiently manage diverse agents and utilize their specialized and generalist abilities for both domain-specific and system-wide tasks. Extensive experiments on three challenging benchmarks demonstrate that AgentStore surpasses the limitations of previous systems with narrow capabilities, particularly achieving a significant improvement from 11.21% to 23.85% on the OSWorld benchmark, more than doubling the previous results. Comprehensive quantitative and qualitative results further demonstrate AgentStore's ability to enhance agent systems in both generalization and specialization, underscoring its potential for developing the specialized generalist computer assistant. All our codes will be made publicly available at https://chengyou-jia.github.io/AgentStore-Home. |
|
2024-10-29T00:00:00 | 2410.20290 | Fast Best-of-N Decoding via Speculative Rejection | [
"Hanshi Sun",
"Momin Haider",
"Ruiqi Zhang",
"Huitao Yang",
"Jiahao Qiu",
"Ming Yin",
"Mengdi Wang",
"Peter Bartlett",
"Andrea Zanette"
] | The safe and effective deployment of Large Language Models (LLMs) involves a critical step called alignment, which ensures that the model's responses are in accordance with human preferences. Prevalent alignment techniques, such as DPO, PPO and their variants, align LLMs by changing the pre-trained model weights during a phase called post-training. While predominant, these post-training methods add substantial complexity before LLMs can be deployed. Inference-time alignment methods avoid the complex post-training step and instead bias the generation towards responses that are aligned with human preferences. The best-known inference-time alignment method, called Best-of-N, is as effective as the state-of-the-art post-training procedures. Unfortunately, Best-of-N requires vastly more resources at inference time than standard decoding strategies, which makes it computationally not viable. In this work, we introduce Speculative Rejection, a computationally-viable inference-time alignment algorithm. It generates high-scoring responses according to a given reward model, like Best-of-N does, while being between 16 to 32 times more computationally efficient. |
|
2024-10-29T00:00:00 | 2410.18565 | Bielik 7B v0.1: A Polish Language Model -- Development, Insights, and Evaluation | [
"Krzysztof Ociepa",
"Łukasz Flis",
"Krzysztof Wróbel",
"Adrian Gwoździej",
"Remigiusz Kinas"
] | We introduce Bielik 7B v0.1, a 7-billion-parameter generative text model for Polish language processing. Trained on curated Polish corpora, this model addresses key challenges in language model development through innovative techniques. These include Weighted Instruction Cross-Entropy Loss, which balances the learning of different instruction types, and Adaptive Learning Rate, which dynamically adjusts the learning rate based on training progress. To evaluate performance, we created the Open PL LLM Leaderboard and Polish MT-Bench, novel frameworks assessing various NLP tasks and conversational abilities. Bielik 7B v0.1 demonstrates significant improvements, achieving a 9 percentage point increase in average score compared to Mistral-7B-v0.1 on the RAG Reader task. It also excels in the Polish MT-Bench, particularly in Reasoning (6.15/10) and Role-playing (7.83/10) categories. This model represents a substantial advancement in Polish language AI, offering a powerful tool for diverse linguistic applications and setting new benchmarks in the field. |
|
2024-10-29T00:00:00 | 2410.21169 | Document Parsing Unveiled: Techniques, Challenges, and Prospects for Structured Information Extraction | [
"Qintong Zhang",
"Victor Shea-Jay Huang",
"Bin Wang",
"Junyuan Zhang",
"Zhengren Wang",
"Hao Liang",
"Shawn Wang",
"Matthieu Lin",
"Wentao Zhang",
"Conghui He"
] | Document parsing is essential for converting unstructured and semi-structured documents, such as contracts, academic papers, and invoices, into structured, machine-readable data. Document parsing extracts reliable structured data from unstructured inputs, providing great convenience for numerous applications. Especially with recent achievements in Large Language Models, document parsing plays an indispensable role in both knowledge base construction and training data generation. This survey presents a comprehensive review of the current state of document parsing, covering key methodologies, from modular pipeline systems to end-to-end models driven by large vision-language models. Core components such as layout detection, content extraction (including text, tables, and mathematical expressions), and multi-modal data integration are examined in detail. Additionally, this paper discusses the challenges faced by modular document parsing systems and vision-language models in handling complex layouts, integrating multiple modules, and recognizing high-density text. It emphasizes the importance of developing larger and more diverse datasets and outlines future research directions. |
|
2024-10-29T00:00:00 | 2406.10615 | Leveraging Locality to Boost Sample Efficiency in Robotic Manipulation | [
"Tong Zhang",
"Yingdong Hu",
"Jiacheng You",
"Yang Gao"
] | Given the high cost of collecting robotic data in the real world, sample efficiency is a consistently compelling pursuit in robotics. In this paper, we introduce SGRv2, an imitation learning framework that enhances sample efficiency through improved visual and action representations. Central to the design of SGRv2 is the incorporation of a critical inductive bias, action locality, which posits that a robot's actions are predominantly influenced by the target object and its interactions with the local environment. Extensive experiments in both simulated and real-world settings demonstrate that action locality is essential for boosting sample efficiency. SGRv2 excels in RLBench tasks with keyframe control using merely 5 demonstrations and surpasses the RVT baseline in 23 of 26 tasks. Furthermore, when evaluated on ManiSkill2 and MimicGen using dense control, SGRv2's success rate is 2.54 times that of SGR. In real-world environments, with only eight demonstrations, SGRv2 can perform a variety of tasks at a markedly higher success rate compared to baseline models. Project website: http://sgrv2-robot.github.io |
|
2024-10-29T00:00:00 | 2410.18481 | Dialog2Flow: Pre-training Soft-Contrastive Action-Driven Sentence Embeddings for Automatic Dialog Flow Extraction | [
"Sergio Burdisso",
"Srikanth Madikeri",
"Petr Motlicek"
] | Efficiently deriving structured workflows from unannotated dialogs remains an underexplored and formidable challenge in computational linguistics. Automating this process could significantly accelerate the manual design of workflows in new domains and enable the grounding of large language models in domain-specific flowcharts, enhancing transparency and controllability. In this paper, we introduce Dialog2Flow (D2F) embeddings, which differ from conventional sentence embeddings by mapping utterances to a latent space where they are grouped according to their communicative and informative functions (i.e., the actions they represent). D2F allows for modeling dialogs as continuous trajectories in a latent space with distinct action-related regions. By clustering D2F embeddings, the latent space is quantized, and dialogs can be converted into sequences of region/action IDs, facilitating the extraction of the underlying workflow. To pre-train D2F, we build a comprehensive dataset by unifying twenty task-oriented dialog datasets with normalized per-turn action annotations. We also introduce a novel soft contrastive loss that leverages the semantic information of these actions to guide the representation learning process, showing superior performance compared to standard supervised contrastive loss. Evaluation against various sentence embeddings, including dialog-specific ones, demonstrates that D2F yields superior qualitative and quantitative results across diverse domains. |
|
2024-10-29T00:00:00 | 2410.19100 | VideoWebArena: Evaluating Long Context Multimodal Agents with Video Understanding Web Tasks | [
"Lawrence Jang",
"Yinheng Li",
"Charles Ding",
"Justin Lin",
"Paul Pu Liang",
"Dan Zhao",
"Rogerio Bonatti",
"Kazuhito Koishida"
] | Videos are often used to learn or extract the necessary information to complete tasks in ways different than what text and static imagery alone can provide. However, many existing agent benchmarks neglect long-context video understanding, instead focusing on text or static image inputs. To bridge this gap, we introduce VideoWebArena (VideoWA), a benchmark for evaluating the capabilities of long-context multimodal agents for video understanding. VideoWA consists of 2,021 web agent tasks based on manually crafted video tutorials, which total almost four hours of content. For our benchmark, we define a taxonomy of long-context video-based agent tasks with two main areas of focus: skill retention and factual retention. While skill retention tasks evaluate whether an agent can use a given human demonstration to complete a task efficiently, the factual retention task evaluates whether an agent can retrieve instruction-relevant information from a video to complete a task. We find that the best model achieves 13.3% success on factual retention tasks and 45.8% on factual retention QA pairs, far below human performance at 73.9% and 79.3%, respectively. On skill retention tasks, long-context models perform worse with tutorials than without, exhibiting a 5% performance decrease in WebArena tasks and a 10.3% decrease in VisualWebArena tasks. Our work highlights the need to improve the agentic abilities of long-context multimodal models and provides a testbed for future development with long-context video agents. |
|
2024-10-29T00:00:00 | 2410.20672 | Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA | [
"Sangmin Bae",
"Adam Fisch",
"Hrayr Harutyunyan",
"Ziwei Ji",
"Seungyeon Kim",
"Tal Schuster"
] | Large language models (LLMs) are expensive to deploy. Parameter sharing offers a possible path towards reducing their size and cost, but its effectiveness in modern LLMs remains fairly limited. In this work, we revisit "layer tying" as a form of parameter sharing in Transformers, and introduce novel methods for converting existing LLMs into smaller "Recursive Transformers" that share parameters across layers, with minimal loss of performance. Here, our Recursive Transformers are efficiently initialized from standard pretrained Transformers, but only use a single block of unique layers that is then repeated multiple times in a loop. We further improve performance by introducing Relaxed Recursive Transformers that add flexibility to the layer tying constraint via depth-wise low-rank adaptation (LoRA) modules, yet still preserve the compactness of the overall model. We show that our recursive models (e.g., recursive Gemma 1B) outperform both similar-sized vanilla pretrained models (such as TinyLlama 1.1B and Pythia 1B) and knowledge distillation baselines -- and can even recover most of the performance of the original "full-size" model (e.g., Gemma 2B with no shared parameters). Finally, we propose Continuous Depth-wise Batching, a promising new inference paradigm enabled by the Recursive Transformer when paired with early exiting. In a theoretical analysis, we show that this has the potential to lead to significant (2-3x) gains in inference throughput. |
|
2024-10-29T00:00:00 | 2410.01968 | Bi-Level Motion Imitation for Humanoid Robots | [
"Wenshuai Zhao",
"Yi Zhao",
"Joni Pajarinen",
"Michael Muehlebach"
] | Imitation learning from human motion capture (MoCap) data provides a promising way to train humanoid robots. However, due to differences in morphology, such as varying degrees of joint freedom and force limits, exact replication of human behaviors may not be feasible for humanoid robots. Consequently, incorporating physically infeasible MoCap data in training datasets can adversely affect the performance of the robot policy. To address this issue, we propose a bi-level optimization-based imitation learning framework that alternates between optimizing both the robot policy and the target MoCap data. Specifically, we first develop a generative latent dynamics model using a novel self-consistent auto-encoder, which learns sparse and structured motion representations while capturing desired motion patterns in the dataset. The dynamics model is then utilized to generate reference motions while the latent representation regularizes the bi-level motion imitation process. Simulations conducted with a realistic model of a humanoid robot demonstrate that our method enhances the robot policy by modifying reference motions to be physically consistent. |
|
2024-10-29T00:00:00 | 2410.21271 | EoRA: Training-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation | [
"Shih-Yang Liu",
"Huck Yang",
"Chein-Yi Wang",
"Nai Chit Fung",
"Hongxu Yin",
"Charbel Sakr",
"Saurav Muralidharan",
"Kwang-Ting Cheng",
"Jan Kautz",
"Yu-Chiang Frank Wang",
"Pavlo Molchanov",
"Min-Hung Chen"
] | In this work, we re-formulate the model compression problem into the customized compensation problem: Given a compressed model, we aim to introduce residual low-rank paths to compensate for compression errors under customized requirements from users (e.g., tasks, compression ratios), resulting in greater flexibility in adjusting overall capacity without being constrained by specific compression formats. However, naively applying SVD to derive residual paths causes suboptimal utilization of the low-rank representation capacity. Instead, we propose Training-free Eigenspace Low-Rank Approximation (EoRA), a method that directly minimizes compression-induced errors without requiring gradient-based training, achieving fast optimization in minutes using a small amount of calibration data. EoRA projects compression errors into the eigenspace of input activations, leveraging eigenvalues to effectively prioritize the reconstruction of high-importance error components. Moreover, EoRA can be seamlessly integrated with fine-tuning and quantization to further improve effectiveness and efficiency. EoRA consistently outperforms previous methods in compensating errors for compressed LLaMA2/3 models on various tasks, such as language generation, commonsense reasoning, and math reasoning tasks (e.g., 31.31%/12.88% and 9.69% improvements on ARC-Easy/ARC-Challenge and MathQA when compensating LLaMA3-8B that is quantized to 4-bit and pruned to 2:4 sparsity). EoRA offers a scalable, training-free solution to compensate for compression errors, making it a powerful tool for deploying LLMs under various capacity and efficiency requirements. |
|
2024-10-30T00:00:00 | 2410.22304 | Flow-DPO: Improving LLM Mathematical Reasoning through Online Multi-Agent Learning | [
"Yihe Deng",
"Paul Mineiro"
] | Mathematical reasoning is a crucial capability for Large Language Models (LLMs), yet generating detailed and accurate reasoning traces remains a significant challenge. This paper introduces a novel approach to produce high-quality reasoning traces for LLM fine-tuning using online learning Flows. Our method employs an incremental output production Flow, where component LLMs collaboratively construct solutions through iterative communication. We train the Flow using online Direct Preference Optimization (DPO) learning with rollouts, generating DPO pairs for each training example and updating models in real-time. We directly compare the quality of reasoning traces generated by our method with those produced through direct model inference, demonstrating the effectiveness of our approach in improving LLM performance in mathematical reasoning tasks. |
|
2024-10-30T00:00:00 | 2410.21465 | ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference | [
"Hanshi Sun",
"Li-Wen Chang",
"Wenlei Bao",
"Size Zheng",
"Ningxin Zheng",
"Xin Liu",
"Harry Dong",
"Yuejie Chi",
"Beidi Chen"
] | https://github.com/bytedance/ShadowKV | With the widespread deployment of long-context large language models (LLMs), there has been a growing demand for efficient support of high-throughput inference. However, as the key-value (KV) cache expands with the sequence length, the increasing memory footprint and the need to access it for each token generation both result in low throughput when serving long-context LLMs. While various dynamic sparse attention methods have been proposed to speed up inference while maintaining generation quality, they either fail to sufficiently reduce GPU memory consumption or introduce significant decoding latency by offloading the KV cache to the CPU. We present ShadowKV, a high-throughput long-context LLM inference system that stores the low-rank key cache and offloads the value cache to reduce the memory footprint for larger batch sizes and longer sequences. To minimize decoding latency, ShadowKV employs an accurate KV selection strategy that reconstructs minimal sparse KV pairs on-the-fly. By evaluating ShadowKV on a broad range of benchmarks, including RULER, LongBench, and Needle In A Haystack, and models like Llama-3.1-8B, Llama-3-8B-1M, GLM-4-9B-1M, Yi-9B-200K, Phi-3-Mini-128K, and Qwen2-7B-128K, we demonstrate that it can support up to 6x larger batch sizes and boost throughput by up to 3.04x on an A100 GPU without sacrificing accuracy, even surpassing the performance achievable with infinite batch size under the assumption of infinite GPU memory. The code is available at https://github.com/bytedance/ShadowKV. |
2024-10-30T00:00:00 | 2410.21845 | Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning | [
"Jianlan Luo",
"Charles Xu",
"Jeffrey Wu",
"Sergey Levine"
] | Reinforcement learning (RL) holds great promise for enabling autonomous acquisition of complex robotic manipulation skills, but realizing this potential in real-world settings has been challenging. We present a human-in-the-loop vision-based RL system that demonstrates impressive performance on a diverse set of dexterous manipulation tasks, including dynamic manipulation, precision assembly, and dual-arm coordination. Our approach integrates demonstrations and human corrections, efficient RL algorithms, and other system-level design choices to learn policies that achieve near-perfect success rates and fast cycle times within just 1 to 2.5 hours of training. We show that our method significantly outperforms imitation learning baselines and prior RL approaches, with an average 2x improvement in success rate and 1.8x faster execution. Through extensive experiments and analysis, we provide insights into the effectiveness of our approach, demonstrating how it learns robust, adaptive policies for both reactive and predictive control strategies. Our results suggest that RL can indeed learn a wide range of complex vision-based manipulation policies directly in the real world within practical training times. We hope this work will inspire a new generation of learned robotic manipulation techniques, benefiting both industrial applications and research advancements. Videos and code are available at our project website https://hil-serl.github.io/. |
|
2024-10-30T00:00:00 | 2410.21411 | SocialGPT: Prompting LLMs for Social Relation Reasoning via Greedy Segment Optimization | [
"Wanhua Li",
"Zibin Meng",
"Jiawei Zhou",
"Donglai Wei",
"Chuang Gan",
"Hanspeter Pfister"
] | https://github.com/Mengzibin/SocialGPT | Social relation reasoning aims to identify relation categories such as friends, spouses, and colleagues from images. While current methods adopt the paradigm of training a dedicated network end-to-end using labeled image data, they are limited in terms of generalizability and interpretability. To address these issues, we first present a simple yet well-crafted framework named SocialGPT, which combines the perception capability of Vision Foundation Models (VFMs) and the reasoning capability of Large Language Models (LLMs) within a modular framework, providing a strong baseline for social relation recognition. Specifically, we instruct VFMs to translate image content into a textual social story, and then utilize LLMs for text-based reasoning. SocialGPT introduces systematic design principles to adapt VFMs and LLMs separately and bridge their gaps. Without additional model training, it achieves competitive zero-shot results on two databases while offering interpretable answers, as LLMs can generate language-based explanations for the decisions. The manual prompt design process for LLMs at the reasoning phase is tedious and an automated prompt optimization method is desired. As we essentially convert a visual classification task into a generative task of LLMs, automatic prompt optimization encounters a unique long prompt optimization issue. To address this issue, we further propose the Greedy Segment Prompt Optimization (GSPO), which performs a greedy search by utilizing gradient information at the segment level. Experimental results show that GSPO significantly improves performance, and our method also generalizes to different image styles. The code is available at https://github.com/Mengzibin/SocialGPT. |
2024-10-30T00:00:00 | 2410.22325 | Robots Pre-train Robots: Manipulation-Centric Robotic Representation from Large-Scale Robot Dataset | [
"Guangqi Jiang",
"Yifei Sun",
"Tao Huang",
"Huanyu Li",
"Yongyuan Liang",
"Huazhe Xu"
] | The pre-training of visual representations has enhanced the efficiency of robot learning. Due to the lack of large-scale in-domain robotic datasets, prior works utilize in-the-wild human videos to pre-train robotic visual representation. Despite their promising results, representations from human videos are inevitably subject to distribution shifts and lack the dynamics information crucial for task completion. We first evaluate various pre-trained representations in terms of their correlation to the downstream robotic manipulation tasks (i.e., manipulation centricity). Interestingly, we find that the "manipulation centricity" is a strong indicator of success rates when applied to downstream tasks. Drawing from these findings, we propose Manipulation Centric Representation (MCR), a foundation representation learning framework capturing both visual features and the dynamics information such as actions and proprioceptions of manipulation tasks to improve manipulation centricity. Specifically, we pre-train a visual encoder on the DROID robotic dataset and leverage motion-relevant data such as robot proprioceptive states and actions. We introduce a novel contrastive loss that aligns visual observations with the robot's proprioceptive state-action dynamics, combined with a behavior cloning (BC)-like actor loss to predict actions during pre-training, along with a time contrastive loss. Empirical results across 4 simulation domains with 20 tasks verify that MCR outperforms the strongest baseline method by 14.8%. Moreover, MCR boosts the performance of data-efficient learning with a UR5e arm on 3 real-world tasks by 76.9%. Project website: https://robots-pretrain-robots.github.io/. |
|
2024-10-30T00:00:00 | 2410.20424 | AutoKaggle: A Multi-Agent Framework for Autonomous Data Science Competitions | [
"Ziming Li",
"Qianbo Zang",
"David Ma",
"Jiawei Guo",
"Tuney Zheng",
"Minghao Liu",
"Xinyao Niu",
"Yue Wang",
"Jian Yang",
"Jiaheng Liu",
"Wanjun Zhong",
"Wangchunshu Zhou",
"Wenhao Huang",
"Ge Zhang"
] | Data science tasks involving tabular data present complex challenges that require sophisticated problem-solving approaches. We propose AutoKaggle, a powerful and user-centric framework that assists data scientists in completing daily data pipelines through a collaborative multi-agent system. AutoKaggle implements an iterative development process that combines code execution, debugging, and comprehensive unit testing to ensure code correctness and logic consistency. The framework offers highly customizable workflows, allowing users to intervene at each phase, thus integrating automated intelligence with human expertise. Our universal data science toolkit, comprising validated functions for data cleaning, feature engineering, and modeling, forms the foundation of this solution, enhancing productivity by streamlining common tasks. We selected 8 Kaggle competitions to simulate data processing workflows in real-world application scenarios. Evaluation results demonstrate that AutoKaggle achieves a validation submission rate of 0.85 and a comprehensive score of 0.82 in typical data science pipelines, fully proving its effectiveness and practicality in handling complex data science tasks. |
|
2024-10-30T00:00:00 | 2410.19609 | OpenWebVoyager: Building Multimodal Web Agents via Iterative Real-World Exploration, Feedback and Optimization | [
"Hongliang He",
"Wenlin Yao",
"Kaixin Ma",
"Wenhao Yu",
"Hongming Zhang",
"Tianqing Fang",
"Zhenzhong Lan",
"Dong Yu"
] | The rapid development of large language and multimodal models has sparked significant interest in using proprietary models, such as GPT-4o, to develop autonomous agents capable of handling real-world scenarios like web navigation. Although recent open-source efforts have tried to equip agents with the ability to explore environments and continuously improve over time, they are building text-only agents in synthetic environments where the reward signals are clearly defined. Such agents struggle to generalize to realistic settings that require multimodal perception abilities and lack ground-truth signals. In this paper, we introduce an open-source framework designed to facilitate the development of a multimodal web agent that can autonomously conduct real-world exploration and improve itself. We first train the base model with imitation learning to gain the basic abilities. We then let the agent explore the open web and collect feedback on its trajectories. After that, it further improves its policy by learning from well-performing trajectories judged by another general-purpose model. This exploration-feedback-optimization cycle can continue for several iterations. Experimental results show that our web agent successfully improves itself after each iteration, demonstrating strong performance across multiple test sets. |
|
2024-10-30T00:00:00 | 2410.18057 | CLEAR: Character Unlearning in Textual and Visual Modalities | [
"Alexey Dontsov",
"Dmitrii Korzh",
"Alexey Zhavoronkin",
"Boris Mikheev",
"Denis Bobkov",
"Aibek Alanov",
"Oleg Y. Rogov",
"Ivan Oseledets",
"Elena Tutubalina"
] | Machine Unlearning (MU) is critical for enhancing privacy and security in deep learning models, particularly in large multimodal language models (MLLMs), by removing specific private or hazardous information. While MU has made significant progress in textual and visual modalities, multimodal unlearning (MMU) remains significantly underexplored, partially due to the absence of a suitable open-source benchmark. To address this, we introduce CLEAR, a new benchmark designed to evaluate MMU methods. CLEAR contains 200 fictitious individuals and 3,700 images linked with corresponding question-answer pairs, enabling a thorough evaluation across modalities. We assess 10 MU methods, adapting them for MMU, and highlight new challenges specific to multimodal forgetting. We also demonstrate that simple ℓ1 regularization on LoRA weights significantly mitigates catastrophic forgetting, preserving model performance on retained data. The dataset is available at https://huggingface.co/datasets/therem/CLEAR |
|
2024-10-30T00:00:00 | 2410.21333 | Mind Your Step (by Step): Chain-of-Thought can Reduce Performance on Tasks where Thinking Makes Humans Worse | [
"Ryan Liu",
"Jiayi Geng",
"Addison J. Wu",
"Ilia Sucholutsky",
"Tania Lombrozo",
"Thomas L. Griffiths"
] | Chain-of-thought (CoT) prompting has become a widely used strategy for working with large language and multimodal models. While CoT has been shown to improve performance across many tasks, determining the settings in which it is effective remains an ongoing effort. In particular, it is still an open question in what settings CoT systematically reduces model performance. In this paper, we seek to identify the characteristics of tasks where CoT reduces performance by drawing inspiration from cognitive psychology, looking at cases where (i) verbal thinking or deliberation hurts performance in humans, and (ii) the constraints governing human performance generalize to language models. Three such cases are implicit statistical learning, visual recognition, and classifying with patterns containing exceptions. In extensive experiments across all three settings, we find that a diverse collection of state-of-the-art models exhibit significant drop-offs in performance (e.g., up to 36.3% absolute accuracy for OpenAI o1-preview compared to GPT-4o) when using inference-time reasoning compared to zero-shot counterparts. We also identify three tasks that satisfy condition (i) but not (ii), and find that while verbal thinking reduces human performance in these tasks, CoT retains or increases model performance. Overall, our results show that while there is not an exact parallel between the cognitive processes of models and those of humans, considering cases where thinking has negative consequences for human performance can help us identify settings where it negatively impacts models. By connecting the literature on human deliberation with evaluations of CoT, we offer a new tool that can be used in understanding the impact of prompt choices and inference-time reasoning. |
|
2024-10-30T00:00:00 | 2410.20088 | RARe: Retrieval Augmented Retrieval with In-Context Examples | [
"Atula Tejaswi",
"Yoonsang Lee",
"Sujay Sanghavi",
"Eunsol Choi"
] | We investigate whether in-context examples, widely used in decoder-only language models (LLMs), can improve embedding model performance in retrieval tasks. Unlike in LLMs, naively prepending in-context examples (query-document pairs) to the target query at inference time does not work out of the box. We introduce a simple approach to enable retrievers to use in-context examples. Our approach, RARe, finetunes a pre-trained model with in-context examples whose query is semantically similar to the target query. This can be applied to adapt various base architectures (i.e., decoder-only language models, retriever models) and consistently achieves performance gains of up to +2.72% nDCG across various open-domain retrieval datasets (BeIR, RAR-b). In particular, we find RARe exhibits stronger out-of-domain generalization compared to models using queries without in-context examples, similar to what is seen for in-context learning in LLMs. We further provide analysis on the design choices of in-context example augmentation and lay the foundation for future work in this space. |
|
2024-10-30T00:00:00 | 2410.21242 | Zero-Shot Dense Retrieval with Embeddings from Relevance Feedback | [
"Nour Jedidi",
"Yung-Sung Chuang",
"Leslie Shing",
"James Glass"
] | Building effective dense retrieval systems remains difficult when relevance supervision is not available. Recent work has looked to overcome this challenge by using a Large Language Model (LLM) to generate hypothetical documents that can be used to find the closest real document. However, this approach relies solely on the LLM to have domain-specific knowledge relevant to the query, which may not be practical. Furthermore, generating hypothetical documents can be inefficient as it requires the LLM to generate a large number of tokens for each query. To address these challenges, we introduce Real Document Embeddings from Relevance Feedback (ReDE-RF). Inspired by relevance feedback, ReDE-RF proposes to re-frame hypothetical document generation as a relevance estimation task, using an LLM to select which documents should be used for nearest neighbor search. Through this re-framing, the LLM no longer needs domain-specific knowledge but only needs to judge what is relevant. Additionally, relevance estimation only requires the LLM to output a single token, thereby improving search latency. Our experiments show that ReDE-RF consistently surpasses state-of-the-art zero-shot dense retrieval methods across a wide range of low-resource retrieval datasets while also making significant improvements in latency per-query. |
|
2024-10-30T00:00:00 | 2410.20305 | Accelerating Direct Preference Optimization with Prefix Sharing | [
"Franklin Wang",
"Sumanth Hegde"
] | https://github.com/frankxwang/dpo-prefix-sharing | Offline paired preference optimization algorithms have become a popular approach for fine-tuning on preference data, outperforming traditional supervised fine-tuning in various tasks. However, traditional implementations often involve redundant computations, especially for tasks with long shared prompts. We introduce prefix sharing for preference tuning, a novel technique that processes chosen and rejected responses as one sequence with a shared prefix. To prevent cross-response contamination, we use a custom block-sparse attention mask. Our method achieves a 1.1-1.5x improvement in training throughput on popular DPO datasets, without any effect on convergence. When combined with sequence packing, we observe consistent 1.3-1.6x speedups, benefiting even datasets with smaller sequence lengths. While we focus on Direct Preference Optimization (DPO), our approach is applicable to other paired preference tuning methods. By enhancing computational efficiency, our work contributes to making preference-based fine-tuning more accessible for a wider range of applications and model sizes. We open-source our code at https://github.com/frankxwang/dpo-prefix-sharing. |
2024-10-30T00:00:00 | 2410.19482 | Measuring memorization through probabilistic discoverable extraction | [
"Jamie Hayes",
"Marika Swanberg",
"Harsh Chaudhari",
"Itay Yona",
"Ilia Shumailov"
] | Large language models (LLMs) are susceptible to memorizing training data, raising concerns due to the potential extraction of sensitive information. Current methods to measure memorization rates of LLMs, primarily discoverable extraction (Carlini et al., 2022), rely on single-sequence greedy sampling, potentially underestimating the true extent of memorization. This paper introduces a probabilistic relaxation of discoverable extraction that quantifies the probability of extracting a target sequence within a set of generated samples, considering various sampling schemes and multiple attempts. This approach addresses the limitations of reporting memorization rates through discoverable extraction by accounting for the probabilistic nature of LLMs and user interaction patterns. Our experiments demonstrate that this probabilistic measure can reveal cases of higher memorization rates compared to rates found through discoverable extraction. We further investigate the impact of different sampling schemes on extractability, providing a more comprehensive and realistic assessment of LLM memorization and its associated risks. Our contributions include a new probabilistic memorization definition, empirical evidence of its effectiveness, and a thorough evaluation across different models, sizes, sampling schemes, and training data repetitions. |
|
2024-10-30T00:00:00 | 2410.21647 | Can Language Models Replace Programmers? REPOCOD Says 'Not Yet' | [
"Shanchao Liang",
"Yiran Hu",
"Nan Jiang",
"Lin Tan"
] | Large language models (LLMs) have shown remarkable ability in code generation with more than 90 pass@1 in solving Python coding problems in HumanEval and MBPP. Such high accuracy leads to the question: can LLMs replace human programmers? Existing manually crafted, simple, or single-line code generation benchmarks cannot answer this question due to their gap with real-world software development. To answer this question, we propose REPOCOD, a code generation benchmark with 980 problems collected from 11 popular real-world projects, with more than 58% of them requiring file-level or repository-level context information. In addition, REPOCOD has the longest average canonical solution length (331.6 tokens) and the highest average cyclomatic complexity (9.00) compared to existing benchmarks. In our evaluations on ten LLMs, none of the models can achieve more than 30 pass@1 on REPOCOD, disclosing the necessity of building stronger LLMs that can help developers in real-world software development. |
|
2024-10-30T00:00:00 | 2410.22330 | Task Vectors are Cross-Modal | [
"Grace Luo",
"Trevor Darrell",
"Amir Bar"
] | We investigate the internal representations of vision-and-language models (VLMs) and how they encode task representations. We consider tasks specified through examples or instructions, using either text or image inputs. Surprisingly, we find that conceptually similar tasks are mapped to similar task vector representations, regardless of how they are specified. Our findings suggest that to output answers, tokens in VLMs undergo three distinct phases: input, task, and answer, a process which is consistent across different modalities and specifications. The task vectors we identify in VLMs are general enough to be derived in one modality (e.g., text) and transferred to another (e.g., image). Additionally, we find that ensembling exemplar and instruction based task vectors produce better task representations. Taken together, these insights shed light on the underlying mechanisms of VLMs, particularly their ability to represent tasks in a shared manner across different modalities and task specifications. Project page: https://task-vectors-are-cross-modal.github.io. |
|
2024-10-31T00:00:00 | 2410.20050 | AutoMIR: Effective Zero-Shot Medical Information Retrieval without Relevance Labels | [
"Lei Li",
"Xiangxu Zhang",
"Xiao Zhou",
"Zheng Liu"
] | https://github.com/CMIRB-benchmark/CMIRB | Medical information retrieval (MIR) is essential for retrieving relevant medical knowledge from diverse sources, including electronic health records, scientific literature, and medical databases. However, achieving effective zero-shot dense retrieval in the medical domain poses substantial challenges due to the lack of relevance-labeled data. In this paper, we introduce a novel approach called Self-Learning Hypothetical Document Embeddings (SL-HyDE) to tackle this issue. SL-HyDE leverages large language models (LLMs) as generators to generate hypothetical documents based on a given query. These generated documents encapsulate key medical context, guiding a dense retriever in identifying the most relevant documents. The self-learning framework progressively refines both pseudo-document generation and retrieval, utilizing unlabeled medical corpora without requiring any relevance-labeled data. Additionally, we present the Chinese Medical Information Retrieval Benchmark (CMIRB), a comprehensive evaluation framework grounded in real-world medical scenarios, encompassing five tasks and ten datasets. By benchmarking ten models on CMIRB, we establish a rigorous standard for evaluating medical information retrieval systems. Experimental results demonstrate that SL-HyDE significantly surpasses existing methods in retrieval accuracy while showcasing strong generalization and scalability across various LLM and retriever configurations. CMIRB data and evaluation code are publicly available at: https://github.com/CMIRB-benchmark/CMIRB. |
2024-10-31T00:00:00 | 2410.23090 | CORAL: Benchmarking Multi-turn Conversational Retrieval-Augmentation Generation | [
"Yiruo Cheng",
"Kelong Mao",
"Ziliang Zhao",
"Guanting Dong",
"Hongjin Qian",
"Yongkang Wu",
"Tetsuya Sakai",
"Ji-Rong Wen",
"Zhicheng Dou"
] | Retrieval-Augmented Generation (RAG) has become a powerful paradigm for enhancing large language models (LLMs) through external knowledge retrieval. Despite its widespread attention, existing academic research predominantly focuses on single-turn RAG, leaving a significant gap in addressing the complexities of multi-turn conversations found in real-world applications. To bridge this gap, we introduce CORAL, a large-scale benchmark designed to assess RAG systems in realistic multi-turn conversational settings. CORAL includes diverse information-seeking conversations automatically derived from Wikipedia and tackles key challenges such as open-domain coverage, knowledge intensity, free-form responses, and topic shifts. It supports three core tasks of conversational RAG: passage retrieval, response generation, and citation labeling. We propose a unified framework to standardize various conversational RAG methods and conduct a comprehensive evaluation of these methods on CORAL, demonstrating substantial opportunities for improving existing approaches. |
|
2024-10-31T00:00:00 | 2410.22391 | A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks | [
"Thomas Schmied",
"Thomas Adler",
"Vihang Patil",
"Maximilian Beck",
"Korbinian Pöppel",
"Johannes Brandstetter",
"Günter Klambauer",
"Razvan Pascanu",
"Sepp Hochreiter"
] | In recent years, there has been a trend in the field of Reinforcement Learning (RL) towards large action models trained offline on large-scale datasets via sequence modeling. Existing models are primarily based on the Transformer architecture, which results in powerful agents. However, due to slow inference times, Transformer-based approaches are impractical for real-time applications, such as robotics. Recently, modern recurrent architectures, such as xLSTM and Mamba, have been proposed that exhibit parallelization benefits during training similar to the Transformer architecture while offering fast inference. In this work, we study the aptitude of these modern recurrent architectures for large action models. Consequently, we propose a Large Recurrent Action Model (LRAM) with an xLSTM at its core that comes with linear-time inference complexity and natural sequence length extrapolation abilities. Experiments on 432 tasks from 6 domains show that LRAM compares favorably to Transformers in terms of performance and speed. |