Muthukumaran committed on
Commit 488fdde
1 Parent(s): 0c6fde4

Update README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -10,9 +10,9 @@ tags:
  pipeline_tag: sentence-similarity
  ---
 
- # Model Card for nasa-smd-ibm-v0.1
+ # Model Card for nasa-smd-ibm-st-v2
 
- `nasa-smd-ibm-st-v2` is improved version of Bi-encoder sentence transformer model (`nasa-smd-ibm-st`), that is fine-tuned from nasa-smd-ibm-v0.1 encoder model. It's trained with 271 million examples along with a domain-specific dataset of 2.6 million examples from documents curated by NASA Science Mission Directorate (SMD). With this model, we aim to enhance natural language technologies like information retrieval and intelligent search as it applies to SMD NLP applications.
+ `nasa-smd-ibm-st-v2` is a bi-encoder sentence transformer model fine-tuned from the nasa-smd-ibm-v0.1 encoder model. It is trained on 271 million examples, along with a domain-specific dataset of 2.6 million examples from documents curated by the NASA Science Mission Directorate (SMD). With this model, we aim to enhance natural language technologies such as information retrieval and intelligent search for SMD NLP applications.
 
  ## Model Details
  - **Base Model**: nasa-smd-ibm-v0.1
@@ -41,19 +41,19 @@ Following models are evaluated:
  3. RoBERTa-base [roberta-base]
  4. nasa-smd-ibm-rtvr_v0.1 [nasa-impact/nasa-smd-ibm-st]
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61099e5d86580d4580767226/QvuEkZJjDGNllRyzl3Oh6.png)
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61099e5d86580d4580767226/eGRC1_EGCp5yAIQiM8Gav.png)
 
  Figure: BEIR Evaluation Metrics
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61099e5d86580d4580767226/J3iuPWaGp_qTbllPFpchi.png)
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61099e5d86580d4580767226/2dTtMDcbEP9I3QkmxGbO4.png)
 
  Figure: Retrieval Benchmark Evaluation
 
  ## Uses
  - Information Retrieval
  - Sentence Similarity Search
- - Retrieval Augmented Generation
 
  For NASA SMD-related, scientific use cases.
@@ -62,7 +62,7 @@ For NASA SMD related, scientific usecases.
  ```python
 
  from sentence_transformers import SentenceTransformer, util
- model = SentenceTransformer('path_to_model')
+ model = SentenceTransformer('path_to_slate_model')
  input_queries = [
  'query: how much protein should a female eat', 'query: summit define']
  input_passages = [
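
For reference, a minimal sketch of how the truncated snippet in the updated README is typically completed with the `sentence-transformers` library: encode queries and passages with the bi-encoder and rank passages by cosine similarity. The Hub id `nasa-impact/nasa-smd-ibm-st-v2` and the example passages below are assumptions for illustration only; they are not part of the commit above.

```python
# Hedged sketch: complete the README's truncated example by encoding queries and
# passages with the bi-encoder and ranking passages by cosine similarity.
from sentence_transformers import SentenceTransformer, util

# Assumed model id for illustration; substitute your local path or the actual repo name.
model = SentenceTransformer('nasa-impact/nasa-smd-ibm-st-v2')

# The model card's snippet prefixes queries with "query:" and passages with "passage:".
input_queries = [
    'query: how much protein should a female eat',
    'query: summit define',
]
input_passages = [
    # Placeholder passages for illustration; substitute your own corpus text.
    'passage: A common dietary guideline for adult women is roughly 46 grams of protein per day.',
    'passage: A summit is the highest point of a mountain.',
]

# Encode both sides with the same bi-encoder, then score every query against every passage.
query_embeddings = model.encode(input_queries, convert_to_tensor=True)
passage_embeddings = model.encode(input_passages, convert_to_tensor=True)
scores = util.cos_sim(query_embeddings, passage_embeddings)  # shape: (num_queries, num_passages)

for query, row in zip(input_queries, scores):
    best = int(row.argmax())
    print(f'{query} -> {input_passages[best]} (score: {row[best].item():.3f})')
```

Because this is a bi-encoder, passage embeddings can be computed once, cached, and reused across queries, which is what makes the model suitable for information retrieval and similarity search at scale.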