nirantk committed
Commit 31c0075
Parent: 2f46487

Add model id information

Files changed (1):
  1. README.md +32 -2
README.md CHANGED
@@ -22,7 +22,37 @@ configs:
  data_files:
  - split: train
    path: data/train-*
+ license: apache-2.0
+ task_categories:
+ - feature-extraction
+ language:
+ - en
+ pretty_name: 'DBPedia SPLADE + OpenAI: 100,000 Vectors'
+ size_categories:
+ - 100K<n<1M
  ---
- # Dataset Card for "dbpedia-entities-efficient-splade-100K"

- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ # DBPedia SPLADE + OpenAI: 100,000 SPLADE Sparse Vectors + OpenAI Embeddings
+
+ This dataset contains both OpenAI embeddings and SPLADE sparse vectors for 100,000 DBpedia entries. It adds SPLADE vectors to [KShivendu/dbpedia-entities-openai-1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M).
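
For a quick look at the rows, here is a minimal loading sketch; the repo id below is assumed from the card title and the committer's namespace and may need adjusting:

```python
from datasets import load_dataset

# Repo id assumed from the card title and committer namespace; adjust if it differs.
ds = load_dataset("nirantk/dbpedia-entities-efficient-splade-100K", split="train")

row = ds[0]
print(row.keys())       # inspect the available columns
print(len(row["vec"]))  # the SPLADE sparse vector referenced in the snippets below
```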
+
+ Model id used to generate the SPLADE document vectors:
+ ```python
+ model_id = "naver/efficient-splade-VI-BT-large-doc"
+ ```
+
+ For encoding queries, use this model:
+ ```python
+ model_id = "naver/efficient-splade-VI-BT-large-query"
+ ```
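
The checkpoints above are standard SPLADE models, so, as a rough sketch (not necessarily the exact pipeline used to build this dataset), a sparse vector can be produced with a masked-language-model forward pass followed by a log-saturated ReLU and max-pooling over the sequence:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Use the -doc checkpoint for documents and the -query checkpoint for queries.
model_id = "naver/efficient-splade-VI-BT-large-doc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

def splade_vector(text: str) -> torch.Tensor:
    tokens = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**tokens).logits          # (1, seq_len, vocab_size)
    # Log-saturated ReLU, masked by attention, max-pooled over the sequence
    weights = torch.log1p(torch.relu(logits)) * tokens["attention_mask"].unsqueeze(-1)
    return weights.max(dim=1).values.squeeze(0)  # one weight per vocabulary entry

vec = splade_vector("Artificial intelligence is intelligence demonstrated by machines.")
```

The result has one weight per vocabulary entry, which should line up with the dense `vec` arrays stored in this dataset.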
+
+ If you'd like to extract the indices and weights/values from the vectors, you can do so with the following snippet:
+
+ ```python
+ import numpy as np
+
+ def get_indices_values(vec):
+     # Positions of the non-zero dimensions and their corresponding weights
+     sparse_indices = vec.nonzero()
+     sparse_values = vec[sparse_indices]
+     return sparse_indices, sparse_values
+
+ vec = np.array(ds[0]['vec'])  # where ds is the dataset
+ sparse_indices, sparse_values = get_indices_values(vec)
+ ```
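
Continuing from the snippet above, and assuming each index is a position in the SPLADE model's vocabulary (the usual layout for these vectors), the indices can be mapped back to readable tokens:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("naver/efficient-splade-VI-BT-large-doc")

sparse_indices, sparse_values = get_indices_values(vec)
# np.nonzero returns a tuple with one array per dimension; vec is 1-D here
token_weights = {
    tokenizer.convert_ids_to_tokens(int(i)): float(w)
    for i, w in zip(sparse_indices[0], sparse_values)
}
print(sorted(token_weights.items(), key=lambda kv: -kv[1])[:10])  # highest-weighted terms
```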