Bowen232 committed
Commit 3c23448
1 Parent(s): 4827ef5

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -19,7 +19,7 @@ We released all of our checkpoints used in [LoRA-Flow](https://aclanthology.org/
 # Summary
 > In this repo, we release LoRA modules and the gate of 7B models trained in our paper in HuggingFace format.
 # Introduction
- LoRA-Flow provides an efficient way to fuse different LoRA modules. The following picture shows our proposed method, we use layer-wise fusion gates to facilitate dynamic LoRA fusion, which project input hidden states of each layer into fusion weights. For more details, please refer to our paper.
+ LoRA-Flow provides an efficient way to fuse different LoRA modules. The following picture shows our proposed method: we use layer-wise fusion gates to facilitate dynamic LoRA fusion, projecting the input hidden states of each layer into fusion weights. LoRA-Flow can be applied to the [Llama-2-7B backbone](https://huggingface.co/meta-llama/Llama-2-7b). For more details, please refer to our paper.
 ![1.jpg](https://cdn-uploads.huggingface.co/production/uploads/64d99f6cd7e30889c6c477b4/ifiu1FTHilrmUkD4FKkgV.jpeg)
 # Training Data
 ## Data used for LoRA modules
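The introduction above describes a layer-wise fusion gate that maps a layer's input hidden states to fusion weights over the available LoRA modules. As a rough illustration (not the authors' released implementation; the class name, shapes, and the softmax normalization are assumptions), such a gate can be sketched in PyTorch as:

```python
import torch
import torch.nn as nn

class LoRAFusionGate(nn.Module):
    """Hypothetical sketch of a layer-wise fusion gate: a linear
    projection from the layer's input hidden states to one weight
    per LoRA module, normalized per token with softmax."""

    def __init__(self, hidden_dim: int, num_loras: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, num_loras)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim)
        # returns fusion weights: (batch, seq_len, num_loras),
        # which would scale each LoRA module's output before summing
        return torch.softmax(self.proj(hidden_states), dim=-1)

# Example with assumed sizes: 4096 is the Llama-2-7B hidden size,
# and 2 stands in for the number of LoRA modules being fused.
gate = LoRAFusionGate(hidden_dim=4096, num_loras=2)
h = torch.randn(1, 8, 4096)
w = gate(h)
print(w.shape)  # torch.Size([1, 8, 2])
```

Because the gate is conditioned on each layer's hidden states, the fusion weights vary per layer and per token, which is what makes the fusion dynamic rather than a single fixed mixing ratio.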