---
base_model: BAAI/bge-small-en-v1.5
library_name: sentence-transformers
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:60341
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: What is the focus of the research conducted by the MHCI x 99P Labs
    Capstone Team?
  sentences:
  - To determine the destination of a given car based on an initial start position
    in time, we developed a Markov Model. We then creatively combined DBScan, K-NN,
    and XGboost algorithms to achieve accurate dwell time forecasts.
  - Transportation networks touch all three pillars of sustainability. They shape
    our daily lives by connecting us to work, retail, and recreation; however, a system
    that does not connect everyone equitably reproduces social disparities.
  - 'Two weeks of digging deep into exploratory, generative research
    Written by the MHCI x 99P Labs Capstone Team. Edited by 99P Labs
    The MHCI x 99P Labs Capstone Team is part of the Master of Human-Computer Interaction
    (MHCI) program at Carnegie Mellon University.'
- source_sentence: What limits are being considered for data quality checks?
  sentences:
  - Unlike many other Agile teams, we don't do a Retro every sprint, mostly because
    we do one-week sprints.
  - Our team has been exploring implementing data quality checks into our data platform.
    We've been trying to establish our goals, limits, and expectations, some of which
    were discussed in Part 1 of our Data Quality blog posts.
  - 'Literature and Topical Review: Each team member performed a literature review on
    telematics research, identifying its applications, methodologies, and critical
    insights.'
- source_sentence: What are the potential consequences of not researching before coding?
  sentences:
  - This indicates a degree of variance in the model's accuracy across different times
    and conditions.
  - In order to objectively test ourselves on the knowledge we've gained, we decide
    to take a quiz. The quiz contains 50 images of either dogs or cats and we have
    to determine which animal the image most closely resembles.
  - To reiterate, before even writing any code, it's important to do proper research
    into your team's documentation and online resources. A lot of time can be saved
    by reusing code that can adapt to your use case instead of starting from scratch
    every time.
- source_sentence: What might be the implications of having a performance of 3%?
  sentences:
  - Then, I will highlight the top three winning projects from each track.
  - Channels can be used only by organizations that are invited to the channel and
    are invisible to other members of the network. Each channel has a separate blockchain
    ledger.
  - 3%, only slightly better than the worst-performing model, K Nearest Neighbors.
- source_sentence: In what context is traffic flow theory typically discussed?
  sentences:
  - As a result, I was familiar with many terms discussed conceptually but I discovered
    some of the more official terminology used when discussing traffic flow theory
    and network control.
  - We called it plus-deltas (+/ ). Seeing the output and outcomes we accomplished
    in each session was encouraging and allowed us to acknowledge things we did that
    made us successful so we could carry it on to the next session.
  - There are different types of projects within C.
model-index:
- name: SentenceTransformer based on BAAI/bge-small-en-v1.5
  results:
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: 99GPT Finetuning Embedding test 01
      type: 99GPT-Finetuning-Embedding-test-01
    metrics:
    - type: cosine_accuracy
      value: 0.9987405541561712
      name: Cosine Accuracy
    - type: dot_accuracy
      value: 0.0011931592204693093
      name: Dot Accuracy
    - type: manhattan_accuracy
      value: 0.9987405541561712
      name: Manhattan Accuracy
    - type: euclidean_accuracy
      value: 0.9987405541561712
      name: Euclidean Accuracy
    - type: max_accuracy
      value: 0.9987405541561712
      name: Max Accuracy
---
# SentenceTransformer based on BAAI/bge-small-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
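The three modules correspond to a BERT encoder, CLS-token pooling, and L2 normalization. For reference, the sketch below reproduces that pipeline with the plain `transformers` API; it is an illustrative equivalent rather than the recommended usage path, and the example sentences are taken from the samples above.
```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "marroyo777/bge-99GPT-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

texts = [
    "In what context is traffic flow theory typically discussed?",
    "There are different types of projects within C.",
]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 384)

# (1) Pooling with pooling_mode_cls_token=True: keep only the [CLS] token embedding.
cls_embeddings = token_embeddings[:, 0]
# (2) Normalize(): L2-normalize so dot product and cosine similarity coincide.
embeddings = F.normalize(cls_embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([2, 384])
```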
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("marroyo777/bge-99GPT-v1")
# Run inference
sentences = [
'In what context is traffic flow theory typically discussed?',
'As a result, I was familiar with many terms discussed conceptually but I discovered some of the more official terminology used when discussing traffic flow theory and network control.',
'There are different types of projects within C.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
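Because the embeddings are L2-normalized, semantic search reduces to ranking candidates by cosine similarity. A minimal sketch, with a made-up corpus and query purely for illustration:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("marroyo777/bge-99GPT-v1")

# Hypothetical corpus and query, purely for illustration.
corpus = [
    "We developed a Markov Model to forecast dwell times.",
    "Each channel has a separate blockchain ledger.",
    "Our team has been exploring data quality checks for our data platform.",
]
query = "How are dwell time forecasts produced?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus entries by cosine similarity to the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 3), corpus[hit["corpus_id"]])
```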
## Evaluation
### Metrics
#### Triplet
* Dataset: `99GPT-Finetuning-Embedding-test-01`
* Evaluated with [TripletEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| cosine_accuracy | 0.9987 |
| dot_accuracy | 0.0012 |
| manhattan_accuracy | 0.9987 |
| euclidean_accuracy | 0.9987 |
| **max_accuracy** | **0.9987** |
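These scores come from the TripletEvaluator; accuracy is the fraction of triplets for which the anchor embedding is closer to the positive than to the negative under the given distance. A minimal sketch of running the same kind of evaluation (the triplets below are placeholders, not the actual test data):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("marroyo777/bge-99GPT-v1")

# Placeholder triplets for illustration only.
anchors = ["In what context is traffic flow theory typically discussed?"]
positives = ["I discovered some of the more official terminology used when discussing traffic flow theory."]
negatives = ["There are different types of projects within C."]

evaluator = TripletEvaluator(
    anchors=anchors,
    positives=positives,
    negatives=negatives,
    name="99GPT-Finetuning-Embedding-test-01",
)
print(evaluator(model))
```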
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 60,341 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
* Samples:
| anchor | positive | negative |
|:-------|:---------|:---------|
| Who is being invited to join the initiative? | Our belief is that the research community will be able to gain access to diverse and real-time data with minimal friction, build exciting innovations and make an impact to Data and AI technologies as well. This is just the first release and we are inviting the research community to join us to build exciting data-driven mobility & energy solutions together. | Burning it destroys the oil. Once you burn the oil, that particular oil ceases to exist. |
| What is the main focus of the research conducted for Orbit? | Orbit holds the culmination of almost a year of research with participants from a wide variety of backgrounds, needs, and jobs to be done. | So how do you win a hackathon mobility challenge? The SmartRoute team showed two of them. |
| What role do LLMs play in HRI's strategy? | We are excited about the potential of JournAI to transform mobility. By harnessing the power of LLMs and other AI technologies, HRI is driving towards a more connected, efficient, and sustainable future. | This simplified the process for users, who only had to pull and run the docker image to spawn a Jupyterlab app on their machine, open it in their browser, and create a new Pyspark notebook that automatically connected to our spark cluster. Our new workflow allows data science teams to configure their spark jobs and compute resources with options to request memory and CPU from the cluster and customize spark settings. |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
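A minimal sketch of instantiating this loss with the parameters above via the Sentence Transformers API. With this loss, each anchor is trained to rank its own positive above the other candidates in the batch, which is why the `no_duplicates` batch sampler listed under the training hyperparameters is used.
```python
from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# Cosine similarity scaled by 20 before the cross-entropy over in-batch candidates.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```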
### Evaluation Dataset
#### Unnamed Dataset
* Size: 15,086 evaluation samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
* Samples:
| anchor | positive | negative |
|:-------|:---------|:---------|
| What does the text suggest about the balance between creating tools and their practical application? | From technology to healthcare, these examples underline the importance of the interplay between theory and practice, between creating advanced tools and applying them effectively. | We found success when leaving the later panels empty as opposed to earlier ones. If we established a clear context and pain point for participants, they were often able to fill in a solution and resolution themselves. |
| Who are the personas mentioned in the text? | Our derived data sets are created based on personas that we have identified and their data access needs. | However there still exists a need to connect the map matched nodes that are outputted from the libraries to specific data points from the V2X data, in order to get the rest of the V2X features in a specific time frame. |
| Is this the first or second hackathon mentioned? | Up next is the first of two hackathons we participated in at Ohio State University. | The team did a great job by targeting a pervasive issue in such an intuitive way. |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
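For reference, a sketch of how these non-default settings map onto `SentenceTransformerTrainingArguments` and the trainer. The dataset loading is stubbed with a tiny in-memory example; the actual run used the 60,341-triplet training set and 15,086-triplet evaluation set described above.
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# Tiny stand-in for the real anchor/positive/negative triplet datasets.
train_dataset = Dataset.from_dict({
    "anchor": ["Who is being invited to join the initiative?"],
    "positive": ["We are inviting the research community to join us."],
    "negative": ["Burning it destroys the oil."],
})
eval_dataset = train_dataset

args = SentenceTransformerTrainingArguments(
    output_dir="bge-99GPT-v1",
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    warmup_ratio=0.1,
    fp16=True,  # requires a CUDA GPU
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=MultipleNegativesRankingLoss(model),  # defaults match: scale=20.0, cos_sim
)
trainer.train()
```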
#### All Hyperparameters