Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss
Abstract
Contrastive loss is a powerful approach for representation learning, where larger batch sizes enhance performance by providing more negative samples to better distinguish between similar and dissimilar data. However, scaling batch sizes is constrained by the quadratic growth in GPU memory consumption, primarily due to the full instantiation of the similarity matrix. To address this, we propose a tile-based computation strategy that partitions the contrastive loss calculation into arbitrarily small blocks, avoiding full materialization of the similarity matrix. Furthermore, we introduce a multi-level tiling strategy to leverage the hierarchical structure of distributed systems, employing ring-based communication at the GPU level to optimize synchronization and fused kernels at the CUDA core level to reduce I/O overhead. Experimental results show that the proposed method scales batch sizes to unprecedented levels. For instance, it enables contrastive training of a CLIP-ViT-L/14 model with a batch size of 4M or 12M using 8 or 32 A800 80GB GPUs without sacrificing any accuracy. Compared to SOTA memory-efficient solutions, it achieves a two-order-of-magnitude reduction in memory while maintaining comparable speed. The code will be made publicly available.
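A minimal PyTorch sketch of the blockwise idea (not the paper's fused CUDA kernels or ring-based multi-GPU implementation; the function name, tile size, and one-directional InfoNCE formulation are illustrative assumptions): the positive logits are computed row-wise, and the softmax denominator is accumulated tile by tile with an online log-sum-exp, so only a B × tile block of the similarity matrix exists in memory at any time.

```python
# Illustrative sketch only: single-GPU, one-directional InfoNCE with tiled
# log-sum-exp accumulation, avoiding the full B x B similarity matrix.
import torch

def tiled_infonce_loss(q, k, temperature=0.07, tile=1024):
    """q, k: L2-normalized embeddings of shape (B, D); positives lie on the diagonal."""
    B = q.shape[0]
    # Positive-pair logits: one dot product per row, no matrix needed.
    pos = (q * k).sum(dim=-1) / temperature                       # (B,)
    # Running log-sum-exp over all keys, accumulated tile by tile.
    lse = torch.full((B,), float("-inf"), device=q.device)
    for start in range(0, B, tile):
        # Only a (B, tile) block of similarities is materialized here.
        block = q @ k[start:start + tile].T / temperature
        lse = torch.logaddexp(lse, torch.logsumexp(block, dim=-1))
    # InfoNCE: negative log-softmax of the positive logit over all keys.
    return (lse - pos).mean()
```

Note that in this naive sketch autograd still saves each tile's activations for the backward pass; the paper's multi-level tiling additionally recomputes tiles inside fused kernels and synchronizes partial results across GPUs via ring-based communication.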
Community
This paper proposes a highly memory-efficient method for computing contrastive loss, scaling batch sizes to unprecedented levels. For instance, it enables contrastive training of a CLIP-ViT-L/14 model with a batch size of 4M or 12M using 8 or 32 A800 80GB GPUs without sacrificing any accuracy.
With 8×A800 GPUs, CLIP and OpenCLIP's memory consumption grows quadratically with batch size, while Inf-CL achieves linear growth, reducing memory costs by 78× at a batch size of 256k. At a batch size of 1024k, previous methods exceed memory limits even with 128 GPUs, whereas Inf-CL reduces memory demand by 281×.
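A rough back-of-the-envelope estimate of where the quadratic growth comes from (illustrative only: fp32 logits and a single similarity matrix, ignoring activations, gradients, and communication buffers, so these figures are not the paper's measured numbers):

```python
# Illustrative memory estimate: fully materialized B x B similarity matrix
# versus a single B x tile block, assuming 4-byte (fp32) logits.
def sim_matrix_gib(batch_size, bytes_per_elem=4):
    return batch_size ** 2 * bytes_per_elem / 1024 ** 3

def tile_gib(batch_size, tile=1024, bytes_per_elem=4):
    return batch_size * tile * bytes_per_elem / 1024 ** 3

for b in (256 * 1024, 1024 * 1024):
    print(f"batch {b}: full matrix ~{sim_matrix_gib(b):.0f} GiB, "
          f"one tile ~{tile_gib(b):.2f} GiB")
# batch 262144:  full matrix ~256 GiB,  one tile ~1.00 GiB
# batch 1048576: full matrix ~4096 GiB, one tile ~4.00 GiB
```

The full matrix scales with B², while a tiled pass only ever holds B × tile elements at once, which is consistent with the linear memory growth reported for Inf-CL.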
Code Link: https://github.com/DAMO-NLP-SG/Inf-CLIP
Arxiv Link: https://arxiv.org/pdf/2410.17243
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Domino: Eliminating Communication in LLM Training via Generic Tensor Slicing and Overlapping (2024)
- COMET: Towards Practical W4A4KV4 LLMs Serving (2024)
- Liger Kernel: Efficient Triton Kernels for LLM Training (2024)
- Superpipeline: A Universal Approach for Reducing GPU Memory Usage in Large Models (2024)
- Efficient Arbitrary Precision Acceleration for Large Language Models on GPU Tensor Cores (2024)
And at this batch size, how much does accuracy improve compared to OpenCLIP?