PiC committed on
Commit aa21c13
1 Parent(s): 55347ac

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -58,10 +58,10 @@ task_ids: []
 
 ### Dataset Summary
 
-PR is a phrase retrieval task with the goal of finding a phrase $\vt$ in a given document $\vd$ such that $\vt$ is semantically similar to the query phrase, which is the paraphrase $\vq_1$ provided by annotators.
-We release two versions of PR: \textbf{PR-pass} and \textbf{PR-page}, \ie datasets of 3-tuples (query $\vq_1$, target phrase $\vt$, document $\vd$) where $\vd$ is a random 11-sentence passage that contains $\vt$ or an entire Wikipedia page.
+PR is a phrase retrieval task with the goal of finding a phrase **t** in a given document **d** such that **t** is semantically similar to the query phrase, which is the paraphrase **q**<sub>1</sub> provided by annotators.
+We release two versions of PR: **PR-pass** and **PR-page**, i.e., datasets of 3-tuples (query **q**<sub>1</sub>, target phrase **t**, document **d**) where **d** is a random 11-sentence passage that contains **t** or an entire Wikipedia page.
 While PR-pass contains 28,147 examples, PR-page contains slightly fewer examples (28,098) as we remove those trivial examples whose Wikipedia pages contain exactly the query phrase (in addition to the target phrase).
-Both datasets are split into 5K/3K/$\sim$20K for test/dev/train, respectively.
+Both datasets are split into 5K/3K/~20K for test/dev/train, respectively.
 
 ### Supported Tasks and Leaderboards
 
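As context for the card text above, here is a minimal loading sketch using the Hugging Face `datasets` library. The repository id `PiC/phrase_retrieval`, the configuration names `PR-pass` and `PR-page`, and the split and field layout are assumptions inferred from this dataset card rather than verified identifiers.

```python
# Minimal sketch, assuming the dataset is hosted on the Hub as
# "PiC/phrase_retrieval" with configurations "PR-pass" and "PR-page";
# these names, and the split/field names below, are assumptions taken
# from this dataset card, so check the Hub page for the exact identifiers.
from datasets import load_dataset

pr_pass = load_dataset("PiC/phrase_retrieval", "PR-pass")
pr_page = load_dataset("PiC/phrase_retrieval", "PR-page")

# The card describes 3-tuple examples (query phrase, target phrase, document)
# split into roughly 5K/3K/~20K test/dev/train per configuration.
print(pr_pass)                 # lists the available splits and their sizes
print(pr_pass["train"][0])     # one example: query, target phrase, document
```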