We use quantized LongLoRA to fine-tune a Llama-2-7b model and extend its context length from 4k to 16k tokens.
The model is fine-tuned on the MeetingBank and QMSum datasets.
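A minimal sketch of this setup, assuming the Hugging Face transformers/peft/bitsandbytes stack; the model name, RoPE scaling factor, LoRA rank, and target modules are illustrative placeholders rather than the exact configuration used, and LongLoRA's shifted sparse attention (S²-Attn) training trick is omitted for brevity:

```python
# Illustrative sketch: quantized LongLoRA-style fine-tuning of Llama-2-7b
# with the context window extended from 4k to 16k tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint name

# 4-bit (NF4) quantization of the frozen base weights, QLoRA-style.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    # Scale RoPE so the 4k-pretrained model can attend over 16k tokens
    # (factor 4.0 = 16k / 4k); the exact scaling scheme is an assumption.
    rope_scaling={"type": "linear", "factor": 4.0},
)
model = prepare_model_for_kbit_training(model)

# LongLoRA-style adapters: LoRA on the attention projections, with
# embeddings and norms also made trainable (hyperparameters illustrative).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    modules_to_save=["embed_tokens", "norm"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.model_max_length = 16384  # extended context length

# The resulting model would then be fine-tuned on MeetingBank and QMSum
# examples packed to the 16k-token context with a standard causal-LM loss.
```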