MrLight committed
Commit bc17d05
1 Parent(s): 5d367b7

Update README.md

Files changed (1)
README.md (+7 -2)
README.md CHANGED
@@ -5,7 +5,7 @@ license: llama2
 
 # RepLLaMA-7B-Passage
 
-[Fine-Tuning LLaMA for Multi-Stage Text Retrieval](TODO).
+[Fine-Tuning LLaMA for Multi-Stage Text Retrieval](https://arxiv.org/abs/2310.08319).
 Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin, arXiv 2023
 
 This model is fine-tuned from LLaMA-2-7B using LoRA and the embedding size is 4096.
@@ -65,5 +65,10 @@ with torch.no_grad():
 If you find our paper or models helpful, please consider cite as follows:
 
 ```
-TODO
+@article{rankllama,
+  title={Fine-Tuning LLaMA for Multi-Stage Text Retrieval},
+  author={Xueguang Ma and Liang Wang and Nan Yang and Furu Wei and Jimmy Lin},
+  year={2023},
+  journal={arXiv:2310.08319},
+}
 ```
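The README line touched by the first hunk describes the model as LLaMA-2-7B fine-tuned with LoRA, producing 4096-dimensional embeddings, and the second hunk's context line (`with torch.no_grad():`) points at a usage snippet that this diff does not show. As a rough guide only, below is a minimal sketch of how such a PEFT LoRA retriever is typically loaded and queried; the adapter repo id, the `passage: ... </s>` prompt format, and the last-token pooling are assumptions for illustration, not quoted from this commit.

```python
# A minimal sketch, not the repository's official snippet: load the LoRA adapter
# with PEFT on top of LLaMA-2-7B and encode one passage into a 4096-dim embedding.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import PeftConfig, PeftModel

ADAPTER_REPO = "castorini/repllama-v1-7b-lora-passage"  # assumed adapter repo id

# Load the base model recorded in the adapter config, then merge the LoRA weights.
peft_config = PeftConfig.from_pretrained(ADAPTER_REPO)
base_model = AutoModel.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, ADAPTER_REPO).merge_and_unload()
model.eval()

tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)

# Assumed RepLLaMA-style encoding: take the hidden state of the final </s> token
# and L2-normalize it.
passage = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(f"passage: {passage}</s>", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
    embedding = outputs.last_hidden_state[0, -1]  # shape: (4096,)
    embedding = torch.nn.functional.normalize(embedding, p=2, dim=0)

print(embedding.shape)
```

Query encoding would presumably follow the same pattern with a `query: ...` prefix; L2-normalizing both sides makes the dot product between query and passage embeddings equivalent to cosine similarity for retrieval.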