Muthukumaran committed d249d84 (parent: ae80128): Update README.md
# Model Card for nasa-smd-ibm-st-v2

`nasa-smd-ibm-st-v2` is a bi-encoder sentence transformer model fine-tuned from the nasa-smd-ibm-v0.1 encoder model. It is an updated version of `nasa-smd-ibm-st` with better performance (shown below). It was trained on 271 million examples along with a domain-specific dataset of 2.6 million examples from documents curated by the NASA Science Mission Directorate (SMD). With this model, we aim to enhance natural language technologies such as information retrieval and intelligent search as they apply to SMD NLP applications.
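As a rough sketch of the bi-encoder retrieval flow described above: the query and each document are embedded independently, then ranked by cosine similarity. The bag-of-words `embed` function below is purely a hypothetical stand-in for the model's encoder, which produces dense embeddings instead.

```python
import math
from collections import Counter

VOCAB = ["solar", "wind", "mars", "rover", "ocean", "satellite"]

def embed(text):
    # Toy stand-in for the sentence encoder: a bag-of-words count vector.
    # In practice the model would produce a dense embedding here.
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in VOCAB]

def cos_sim(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

# Bi-encoder retrieval: documents and the query are encoded independently,
# then candidates are ranked by cosine similarity to the query embedding.
docs = ["mars rover images", "ocean satellite data", "solar wind measurements"]
query = "solar wind"
q_emb = embed(query)
ranked = sorted(docs, key=lambda d: cos_sim(q_emb, embed(d)), reverse=True)
```

Because documents are encoded independently of the query, their embeddings can be precomputed and indexed, which is what makes bi-encoders practical for large-scale search.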
## Model Details

- **Base Encoder Model**: nasa-smd-ibm-v0.1
- **Tokenizer**: Custom
- **Parameters**: 125M
- **Training Strategy**: Sentence pairs with a score indicating their relevancy. The model encodes the two sentences of each pair independently, their cosine similarity is calculated, and the similarity is optimized against the relevance score.
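The training objective in the last bullet can be sketched as follows. The embeddings and relevance score below are made-up illustrative values; in real training the encoder produces the embeddings and is updated to shrink the loss.

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two independently computed sentence embeddings.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings for one sentence pair (the encoder outputs these).
emb_a = [0.1, 0.3, 0.5]
emb_b = [0.2, 0.1, 0.4]

# Annotated relevance score for the pair; training minimizes the gap between
# the predicted cosine similarity and this score (here, as squared error).
relevance = 0.8
similarity = cosine_similarity(emb_a, emb_b)
loss = (similarity - relevance) ** 2
```

Gradients of this loss flow back through both embeddings, so the encoder learns to place relevant pairs close together in the embedding space.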
## Training Data

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61099e5d86580d4580767226/ZjcHW24iKsvUYBhoL7eMM.png)