MrLight committed
Commit 5d367b7
1 Parent(s): 784f3ad

Update README.md

Files changed (1)
  1. README.md +13 -9
README.md CHANGED
@@ -10,9 +10,13 @@ Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin, arXiv 2023
 
 This model is fine-tuned from LLaMA-2-7B using LoRA and the embedding size is 4096.
 
+## Training Data
+The model is fine-tuned on the training split of the [MS MARCO Passage Ranking](https://microsoft.github.io/msmarco/Datasets) dataset for 1 epoch.
+Please check our paper for details.
+
 ## Usage
 
-Below is an example to encode a query and a document, and then compute their similarity using their embedding.
+Below is an example to encode a query and a passage, and then compute their similarity using their embeddings.
 
 ```python
 import torch
@@ -31,27 +35,27 @@ def get_model(peft_model_name):
 tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')
 model = get_model('castorini/repllama-v1-7b-lora-passage')
 
-# Define query and document inputs
+# Define query and passage inputs
 query = "What is llama?"
 title = "Llama"
 passage = "The llama is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era."
 query_input = tokenizer(f'query: {query}</s>', return_tensors='pt')
-document_input = tokenizer(f'passage: {title} {passage}</s>', return_tensors='pt')
+passage_input = tokenizer(f'passage: {title} {passage}</s>', return_tensors='pt')
 
-# Run the model forward to compute embeddings and query-document similarity score
+# Run the model forward to compute embeddings and query-passage similarity score
 with torch.no_grad():
     # compute query embedding
     query_outputs = model(**query_input)
     query_embedding = query_outputs.last_hidden_state[0][-1]
     query_embedding = torch.nn.functional.normalize(query_embedding, p=2, dim=0)
 
-    # compute document embedding
-    document_outputs = model(**document_input)
-    document_embeddings = document_outputs.last_hidden_state[0][-1]
-    document_embeddings = torch.nn.functional.normalize(document_embeddings, p=2, dim=0)
+    # compute passage embedding
+    passage_outputs = model(**passage_input)
+    passage_embeddings = passage_outputs.last_hidden_state[0][-1]
+    passage_embeddings = torch.nn.functional.normalize(passage_embeddings, p=2, dim=0)
 
     # compute similarity score
-    score = torch.dot(query_embedding, document_embeddings)
+    score = torch.dot(query_embedding, passage_embeddings)
     print(score)
 
 ```
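
For context: the second hunk's header references `def get_model(peft_model_name):`, but the function body lies outside the changed lines and is not shown in this diff. Below is a minimal sketch of what such a helper typically does with PEFT, assuming the usual pattern of loading the base model named in the adapter config and merging the LoRA weights; the exact implementation in the README may differ.

```python
# Sketch only: this is not code from the diff, just one plausible get_model
# implementation for loading a LoRA adapter on top of LLaMA-2 with PEFT.
from transformers import AutoModel
from peft import PeftConfig, PeftModel

def get_model(peft_model_name):
    # read the adapter config to find which base model it was trained on
    config = PeftConfig.from_pretrained(peft_model_name)
    base_model = AutoModel.from_pretrained(config.base_model_name_or_path)
    # attach the LoRA weights, then merge them into the base weights for inference
    model = PeftModel.from_pretrained(base_model, peft_model_name)
    model = model.merge_and_unload()
    model.eval()
    return model
```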
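
A brief usage note on the snippet in the diff: because both embeddings are L2-normalized, the dot product is their cosine similarity, so ranking candidate passages for a query amounts to encoding each passage the same way and sorting by score. A hypothetical extension, reusing `tokenizer`, `model`, and `query_embedding` from the example (the candidate passages here are made up):

```python
import torch

# Hypothetical candidates; in practice these would come from a corpus or a first-stage retriever.
candidates = [
    ("Llama", "The llama is a domesticated South American camelid, widely used as a meat and pack animal."),
    ("Alpaca", "The alpaca is a species of South American camelid descended from the vicuna."),
]

scores = []
with torch.no_grad():
    for title, passage in candidates:
        passage_input = tokenizer(f'passage: {title} {passage}</s>', return_tensors='pt')
        outputs = model(**passage_input)
        # last token's hidden state, L2-normalized, as in the README example
        emb = torch.nn.functional.normalize(outputs.last_hidden_state[0][-1], p=2, dim=0)
        scores.append(torch.dot(query_embedding, emb).item())

# sort candidates by cosine similarity to the query, highest first
ranking = sorted(zip(candidates, scores), key=lambda item: item[1], reverse=True)
print(ranking)
```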