---
dataset_info:
  features:
  - name: type_
    dtype: string
  - name: block
    struct:
    - name: html_tag
      dtype: string
    - name: id
      dtype: string
    - name: order
      dtype: int64
    - name: origin_type
      dtype: string
    - name: text
      struct:
      - name: embedding
        sequence: float64
      - name: text
        dtype: string
  splits:
  - name: train
    num_bytes: 2266682282
    num_examples: 260843
  download_size: 2272790159
  dataset_size: 2266682282
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for "es_indexing_benchmark"

Here is example code showing how to pull this dataset and index it into Elasticsearch:

```python
import datasets
from tqdm import tqdm

from src.store.es.search import ESBaseClient
from src.store.es.model import ESNode

# Pull the dataset from the Hugging Face Hub.
ds = datasets.load_dataset('stellia/es_indexing_benchmark', split='train', ignore_verifications=True)
client = ESBaseClient()

index_name = "tmp_es_index"

# Build ES documents, using each block's id as the document id.
nodes = []
for row in tqdm(ds):
    esnode = ESNode(**row)
    esnode.meta.id = esnode.block.id
    nodes.append(esnode)

# Recreate the index from scratch.
client.delete_index(index_name)
client.init_index(index_name)

# Index in batches, deferring the refresh for faster bulk ingestion.
batch_size = 5000
for i in tqdm(range(0, len(nodes), batch_size)):
    client.save(index_name, nodes[i:i+batch_size], refresh=False)
```
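`ESBaseClient` and `ESNode` come from a project-internal module (`src.store.es`), so they are not installable from PyPI. As a rough, hedged sketch of the same logic with plain dictionaries: the helpers below build bulk-index actions (reusing each row's `block.id` as the document `_id`, mirroring `esnode.meta.id = esnode.block.id` above) and batch them. The function names `build_actions` and `batched` are illustrative, not part of the dataset or any library.

```python
def build_actions(rows, index_name):
    """Turn dataset rows into Elasticsearch-style bulk index actions.

    Each row's block id becomes the document _id, as in the snippet above.
    """
    for row in rows:
        yield {
            "_op_type": "index",
            "_index": index_name,
            "_id": row["block"]["id"],
            "_source": row,
        }


def batched(iterable, size):
    """Yield lists of up to `size` items, like the batch loop above."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch
```

With the official `elasticsearch` Python client, the resulting actions could be fed to `elasticsearch.helpers.bulk(es, build_actions(ds, "tmp_es_index"))` against a running cluster.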


Consider emptying `~/.cache/huggingface/datasets` (e.g. with `rm -rf ~/.cache/huggingface/datasets`) if you have problems loading the dataset.