- LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery — arXiv:2310.18356, published Oct 24, 2023
- LoRAPrune: Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning — arXiv:2305.18403, published May 28, 2023
- Parameter-Efficient Fine-Tuning with Layer Pruning on Free-Text Sequence-to-Sequence Modeling — arXiv:2305.08285, published May 15, 2023
- A Comparative Analysis of Task-Agnostic Distillation Methods for Compressing Transformer Language Models — arXiv:2310.08797, published Oct 13, 2023