---
license: mit
---
<div align="center">

<!-- <img src="title.png" alt="LoRA-Flow" width="200"> -->
<!-- ***LORA-Flow*** -->
<b><i style="font-size: 24px;">LoRA-Flow</i></b>

LoRAs and fusion gates for our paper

<p align="center">
 <a href="https://aclanthology.org/2024.acl-long.695/">Paper</a> · <a href="https://github.com/pingbowen23/LoRA-Flow">GitHub</a>
</p>
</div>

We release all of the checkpoints used in [LoRA-Flow](https://aclanthology.org/2024.acl-long.695.pdf), which has been accepted to the ACL 2024 main conference.
# Summary
> In this repo, we release the LoRA modules and fusion gates of the 7B models trained in our paper, in Hugging Face format.
# Introduction
LoRA-Flow provides an efficient way to fuse different LoRA modules. The figure below illustrates our proposed method: we use layer-wise fusion gates to enable dynamic LoRA fusion, projecting the input hidden states of each layer into fusion weights. LoRA-Flow can be applied to the [Llama-2-7B backbone](https://huggingface.co/meta-llama/Llama-2-7b). For more details, please refer to our paper.
![1.jpg](https://cdn-uploads.huggingface.co/production/uploads/64d99f6cd7e30889c6c477b4/ifiu1FTHilrmUkD4FKkgV.jpeg)
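The gating idea above can be sketched in a few lines. The following is a minimal, framework-agnostic NumPy illustration, not the released implementation: the gate matrix `W_gate`, the LoRA count `k`, and the softmax normalization are assumptions made for the sake of the example.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_lora_outputs(h, lora_outputs, W_gate):
    """Sketch of a layer-wise fusion gate.

    h            : (d,)   input hidden state of the current layer
    lora_outputs : (k, d) outputs of the k LoRA modules at this layer
    W_gate       : (d, k) gate projection (trained few-shot in the paper)

    The gate projects h into k fusion weights, which combine the
    per-LoRA outputs into a single delta for the layer.
    """
    weights = softmax(h @ W_gate)   # (k,) dynamic, input-dependent fusion weights
    return weights @ lora_outputs   # (d,) fused LoRA delta

# Toy example: hidden size d=4, k=2 LoRAs (e.g. a language LoRA and a task LoRA)
rng = np.random.default_rng(0)
h = rng.normal(size=4)
lora_outputs = rng.normal(size=(2, 4))
W_gate = rng.normal(size=(4, 2))
delta = fuse_lora_outputs(h, lora_outputs, W_gate)
print(delta.shape)  # (4,)
```

Because the weights depend on the current hidden state, different tokens and layers can emphasize different LoRA modules, which is what distinguishes this from static averaging.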
# Training Data
## Data used for LoRA modules
For the language LoRA modules, we use the 52K training samples from [Okapi](https://aclanthology.org/2023.emnlp-demo.28) for each language.

For the math LoRA module, we use [MetaMath](https://arxiv.org/abs/2309.12284), which comprises 395K mathematical problems and the corresponding solutions in English.
 
For the code LoRA module, we use the [Magicoder](https://arxiv.org/abs/2312.02120) dataset, which consists of 186K code-generation problems and the corresponding solutions in English.

## Data used for gates
We use gates to fuse the different LoRA modules. The gates are trained in a few-shot setting, and we have released the training data. For more details, please refer to our GitHub repository.

# Results
We report the results for single LoRAs and LoRA-Flow below.

<table border="1" cellspacing="0" cellpadding="10">
    <caption>
        Evaluation results on MGSM and HumanEval. ‘Lang’ denotes the chat LoRA in the target language and ‘Task’ represents the math or code LoRA trained in English. LoRA fusion methods combine the language LoRA and the task LoRA to accomplish the new task. The best score is highlighted in bold.
    </caption>
    <thead>
        <tr>
            <th colspan="2" rowspan="2">Method</th>
            <th colspan="4">MGSM (Math)</th>
            <th colspan="4">HumanEval (Code)</th>
        </tr>
        <tr>
            <th>Zh</th>
            <th>Ru</th>
            <th>Es</th>
            <th>Avg.</th>
            <th>Zh</th>
            <th>Ru</th>
            <th>Es</th>
            <th>Avg.</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td colspan="2">Base Model</td>
            <td>4.4</td>
            <td>3.2</td>
            <td>2.4</td>
            <td>3.3</td>
            <td>0.0</td>
            <td>0.0</td>
            <td>2.4</td>
            <td>0.8</td>
        </tr>
        <tr>
            <td rowspan="2">Single LoRA</td>
            <td>Lang</td>
            <td>5.2</td>
            <td>3.6</td>
            <td>3.6</td>
            <td>4.1</td>
            <td>12.2</td>
            <td>14.0</td>
            <td>10.4</td>
            <td>12.2</td>
        </tr>
        <tr>
            <td>Task</td>
            <td>26.8</td>
            <td>32.8</td>
            <td>41.2</td>
            <td>33.6</td>
            <td>18.3</td>
            <td>23.2</td>
            <td>21.9</td>
            <td>21.1</td>
        </tr>
        <tr>
            <td rowspan="3">LoRA Fusion</td>
            <td>Avg</td>
            <td>12.8</td>
            <td>10.4</td>
            <td>18.4</td>
            <td>13.9</td>
            <td>17.1</td>
            <td>17.7</td>
            <td>18.3</td>
            <td>17.7</td>
        </tr>
        <tr>
            <td>LoRA-Hub</td>
            <td>20.8</td>
            <td>28.4</td>
            <td>36.8</td>
            <td>28.7</td>
            <td>19.5</td>
            <td>21.3</td>
            <td>20.1</td>
            <td>20.3</td>
        </tr>
        <tr>
            <td>LoRA-Flow</td>
            <td><strong>33.2</strong></td>
            <td><strong>37.6</strong></td>
            <td><strong>42.0</strong></td>
            <td><strong>37.6</strong></td>
            <td><strong>20.7</strong></td>
            <td><strong>23.8</strong></td>
            <td><strong>23.2</strong></td>
            <td><strong>22.6</strong></td>
        </tr>
    </tbody>
</table>





# Citation
```bibtex
@inproceedings{wang-etal-2024-lora-flow,
    title = "LoRA-Flow: Dynamic LoRA Fusion for Large Language Models in Generative Tasks",
    author = "Wang, Hanqing  and
      Ping, Bowen  and
      Wang, Shuo  and
      Han, Xu  and
      Chen, Yun  and
      Liu, Zhiyuan  and
      Sun, Maosong",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.695",
    doi = "10.18653/v1/2024.acl-long.695",
    pages = "12871--12882"
}
```
<!-- [LoRA-Flow: Dynamic LoRA Fusion for Large Language Models in Generative Tasks](https://aclanthology.org/2024.acl-long.695)  -->