---
license: apache-2.0
language:
- en
library_name: sentence-transformers
tags:
- earth science
- climate
- biology
pipeline_tag: sentence-similarity
---
# Model Card for nasa-smd-ibm-st.38m
`nasa-smd-ibm-st.38m` is a bi-encoder sentence transformer model fine-tuned from a distilled version of the nasa-smd-ibm-v0.1 encoder model. It is a smaller version of `nasa-smd-ibm-st` that achieves better performance with fewer parameters (shown below). It was trained on 362 million general-domain examples together with a domain-specific dataset of 2.6 million examples curated from NASA Science Mission Directorate (SMD) documents. With this model, we aim to enhance natural language technologies such as information retrieval and intelligent search as they apply to SMD NLP applications.
A larger model is also available here: https://huggingface.co/nasa-impact/nasa-smd-ibm-st-v2
## Model Details
- **Base Encoder Model**: nasa-smd-ibm-v0.1
- **Tokenizer**: Custom
- **Parameters**: 38M
- **Training Strategy**: Sentence pairs with a score indicating relevancy. The model encodes the two sentences of each pair independently, their cosine similarity is computed, and that similarity is optimized against the relevance score (a minimal sketch of this objective is shown below).
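As a rough illustration of this objective, the sketch below fine-tunes a bi-encoder with sentence-transformers' `CosineSimilarityLoss`. The toy pairs and relevance scores are invented for the example and are not drawn from the actual training data.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Hypothetical toy pairs: two sentences plus a relevance score in [0, 1].
train_examples = [
    InputExample(texts=["sea surface temperature anomaly",
                        "ocean heat content is rising"], label=0.9),
    InputExample(texts=["sea surface temperature anomaly",
                        "exoplanet transit photometry"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

model = SentenceTransformer("nasa-impact/nasa-smd-ibm-st.38m")

# CosineSimilarityLoss encodes both sentences independently and regresses
# the cosine similarity of their embeddings toward the relevance label.
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)],
          epochs=1, warmup_steps=10)
```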
## Training Data
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61099e5d86580d4580767226/fcsd0fEY_EoMA1F_CsEbD.png)
Figure: Dataset sources for sentence-transformer training (362M examples in total)
Additionally, 2.6M title-abstract pairs were collected from NASA SMD documents.
## Training Procedure
- **Framework**: PyTorch 1.9.1
- **Transformers version**: 4.30.2
- **Strategy**: Sentence pairs
## Evaluation
The following models were evaluated:
1. All-MiniLM-l6-v2 [sentence-transformers/all-MiniLM-L6-v2]
2. BGE-base [BAAI/bge-base-en-v1.5]
3. RoBERTa-base [roberta-base]
4. nasa-smd-ibm-rtvr_v2 [nasa-impact/nasa-smd-ibm-st-v2]
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61099e5d86580d4580767226/0e83srGhSH7-n11tezzHV.png)
Figure: BEIR (https://github.com/beir-cellar/beir) Evaluation Metrics
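For reference, a bi-encoder like this one can be scored on a BEIR task roughly as follows; the dataset (SciFact) and batch size below are illustrative choices, not necessarily those used for the reported numbers.
```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval import models
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES

# Download one BEIR dataset (SciFact shown here as an example).
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

# Wrap the bi-encoder for exact (brute-force) dense retrieval.
beir_model = DRES(models.SentenceBERT("nasa-impact/nasa-smd-ibm-st.38m"),
                  batch_size=64)
retriever = EvaluateRetrieval(beir_model, score_function="cos_sim")

# Retrieve, then compute nDCG/MAP/Recall/Precision at the standard cutoffs.
results = retriever.retrieve(corpus, queries)
ndcg, _map, recall, precision = retriever.evaluate(qrels, results,
                                                   retriever.k_values)
print(ndcg)
```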
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61099e5d86580d4580767226/KerkB8PvDDPTcj9JBWtwG.png)
Figure: NASA QA Retrieval Benchmark Evaluation
## Uses
- Information Retrieval
- Sentence Similarity Search

Intended for NASA SMD-related scientific use cases.
### Usage
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("nasa-impact/nasa-smd-ibm-st.38m")

input_queries = [
    'query: how much protein should a female eat',
    'query: summit define',
]
input_passages = [
    "As a general guideline, the CDC's average requirement of protein for women "
    "ages 19 to 70 is 46 grams per day. But, as you can see from this chart, "
    "you'll need to increase that if you're expecting or training for a marathon. "
    "Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1 the highest point of "
    "a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or "
    "series of meetings between the leaders of two or more governments.",
]

# Queries and passages are embedded independently; relevance is the cosine
# similarity between the two embedding sets.
query_embeddings = model.encode(input_queries)
passage_embeddings = model.encode(input_passages)

print(util.cos_sim(query_embeddings, passage_embeddings))
```
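Beyond pairwise scoring, the same embeddings support top-k retrieval over a passage collection. Below is a minimal sketch using sentence-transformers' `util.semantic_search`; the mini-corpus is invented for illustration, not taken from SMD holdings.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("nasa-impact/nasa-smd-ibm-st.38m")

# Hypothetical mini-corpus of SMD-style passages.
corpus = [
    "Aerosol optical depth retrievals from MODIS over the Sahara.",
    "Microbial gene expression changes observed in spaceflight conditions.",
    "Sea level rise projections from satellite altimetry records.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("query: satellite measurements of sea level",
                               convert_to_tensor=True)

# Rank passages by cosine similarity and keep the top 2 hits.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))
```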
# Note
This sentence-transformer model is released in support of the training and evaluation of the encoder language model ["Indus"](https://huggingface.co/nasa-impact/nasa-smd-ibm-v0.1).
The accompanying paper can be found here: https://arxiv.org/abs/2405.10725
## Citation
If you find this work useful, please cite using the following bibtex citation:
```bibtex
@misc{nasa-impact_2024,
  author    = {{NASA-IMPACT}},
  title     = {nasa-ibm-st.38m (Revision 9c1989c)},
  year      = 2024,
  url       = {https://huggingface.co/nasa-impact/nasa-ibm-st.38m},
  doi       = {10.57967/hf/1875},
  publisher = {Hugging Face}
}
```
## Attribution
IBM Research
- Aashka Trivedi
- Masayasu Muraoka
- Bishwaranjan Bhattacharjee
- Takuma Udagawa
NASA SMD
- Muthukumaran Ramasubramanian
- Iksha Gurung
- Rahul Ramachandran
- Manil Maskey
- Kaylin Bugbee
- Mike Little
- Elizabeth Fancher
- Lauren Sanders
- Sylvain Costes
- Sergi Blanco-Cuaresma
- Kelly Lockhart
- Thomas Allen
- Felix Grazes
- Megan Ansdell
- Alberto Accomazzi
- Sanaz Vahidinia
- Ryan McGranaghan
- Armin Mehrabian
- Tsendgar Lee
## Disclaimer
This sentence-transformer model is currently in an experimental phase. We are working to improve the model's capabilities and performance, and as we progress, we invite the community to engage with this model, provide feedback, and contribute to its evolution.