---
dataset_info:
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: openai
    sequence: float32
  - name: splade
    sequence: float32
  splits:
  - name: train
    num_bytes: 12862697823
    num_examples: 100000
  download_size: 901410913
  dataset_size: 12862697823
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
task_categories:
- feature-extraction
language:
- en
pretty_name: 'DBPedia SPLADE + OpenAI: 100,000 Vectors'
size_categories:
- 100K<n<1M
---
# DBPedia SPLADE + OpenAI: 100,000 SPLADE Sparse Vectors + OpenAI Embeddings

This dataset contains both OpenAI embeddings and SPLADE sparse vectors for 100,000 DBPedia entries. It adds SPLADE vectors to [KShivendu/dbpedia-entities-openai-1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M).
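
To get a feel for the schema, here is a minimal loading sketch. It assumes the `datasets` library is installed; `"<dataset-repo-id>"` is a placeholder you should replace with this dataset's actual Hub ID.

```python
from datasets import load_dataset

# "<dataset-repo-id>" is a placeholder: substitute this dataset's Hugging Face Hub ID.
ds = load_dataset("<dataset-repo-id>", split="train")

row = ds[0]
print(row["_id"], row["title"])
print(len(row["openai"]))   # dense OpenAI embedding for the entry
print(len(row["splade"]))   # vocabulary-sized SPLADE vector, mostly zeros
```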
The model ID used to generate the document-side SPLADE vectors:
```python
model_id = "naver/efficient-splade-VI-BT-large-doc"
```
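
As an illustration of how document vectors like the ones stored here can be produced, below is a minimal sketch using the standard SPLADE pooling (log(1 + ReLU) of the masked-LM logits, max-pooled over tokens). It assumes `torch` and `transformers` are installed; the helper name `splade_encode` is ours, not part of any library.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "naver/efficient-splade-VI-BT-large-doc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

def splade_encode(text):
    tokens = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**tokens).logits                      # (1, seq_len, vocab_size)
    # log-saturated ReLU, masked so padding positions contribute nothing
    weights = torch.log1p(torch.relu(logits)) * tokens["attention_mask"].unsqueeze(-1)
    return weights.max(dim=1).values.squeeze(0)              # (vocab_size,) mostly zeros

doc_vec = splade_encode("Berlin is the capital and largest city of Germany.")
print(doc_vec.shape, int((doc_vec > 0).sum()))
```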
For encoding queries, use the matching query-side model:
```python
model_id = "naver/efficient-splade-VI-BT-large-query"
```
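
To score a query against the stored document vectors, encode it with the query model using the same pooling and take a dot product with a row's `splade` column. A minimal sketch under the same assumptions as above (the helper name `splade_encode_query` and the dataset ID placeholder are illustrative, not library APIs):

```python
import numpy as np
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForMaskedLM

query_model_id = "naver/efficient-splade-VI-BT-large-query"
q_tokenizer = AutoTokenizer.from_pretrained(query_model_id)
q_model = AutoModelForMaskedLM.from_pretrained(query_model_id)

def splade_encode_query(text):
    tokens = q_tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = q_model(**tokens).logits
    weights = torch.log1p(torch.relu(logits)) * tokens["attention_mask"].unsqueeze(-1)
    return weights.max(dim=1).values.squeeze(0).numpy()

# "<dataset-repo-id>" is a placeholder for this dataset's Hub ID.
ds = load_dataset("<dataset-repo-id>", split="train")
query_vec = splade_encode_query("capital of Germany")
doc_vec = np.array(ds[0]["splade"])
print(float(query_vec @ doc_vec))   # sparse relevance score for the first entry
```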
If you'd like to extract the indices and weights from a stored SPLADE vector, you can use the following snippet:
```python
import numpy as np

def get_indices_values(vec):
    # positions and values of the nonzero SPLADE weights
    sparse_indices = vec.nonzero()[0]
    sparse_values = vec[sparse_indices]
    return sparse_indices, sparse_values

vec = np.array(ds[0]["splade"])  # ds is the loaded dataset; "splade" is the sparse column
indices, values = get_indices_values(vec)
```
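
The nonzero indices are vocabulary IDs of the document tokenizer, so they can be mapped back to readable terms. A small sketch, assuming `transformers` is installed and `ds` and `get_indices_values` are defined as above:

```python
from transformers import AutoTokenizer

doc_tokenizer = AutoTokenizer.from_pretrained("naver/efficient-splade-VI-BT-large-doc")

indices, values = get_indices_values(np.array(ds[0]["splade"]))
terms = doc_tokenizer.convert_ids_to_tokens(indices.tolist())
top_terms = sorted(zip(terms, values.tolist()), key=lambda p: -p[1])[:10]
print(top_terms)   # highest-weighted SPLADE terms for the first entry
```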