Update README.md
README.md
CHANGED
@@ -1,3 +1,51 @@
---
language:
- en
pretty_name: ScienceWikiSmallChunk256
tags:
- RAG
- Retrieval Augmented Generation
- Small Chunks
- Wikipedia
- Science
- Scientific
- Scientific Wikipedia
- Science Wikipedia
- 256 tokens
license: cc-by-sa-3.0
task_categories:
- text-generation
- text-classification
- question-answering
---

# ScienceWikiSmallChunk

A processed version of millawell/wikipedia_field_of_science, prepared for use in small-context-length RAG systems. Chunk length is tokenizer-dependent, but each chunk should be around 256 tokens. Longer Wikipedia pages have been split into smaller entries, with the page title added as a prefix.

A 512-token version is also available: Laz4rz/wikipedia_science_chunked_small_rag_512
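
A minimal loading sketch using the `datasets` library. The repository id below is an assumption inferred from the 512-token sibling named above; adjust it if this card lives under a different id.

```python
from datasets import load_dataset

# Assumed repository id, inferred from Laz4rz/wikipedia_science_chunked_small_rag_512.
dataset = load_dataset("Laz4rz/wikipedia_science_chunked_small_rag_256")
print(dataset)
print(dataset["train"][0])  # assumes a "train" split
```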
If you wish to prepare a different chunk length:
- use millawell/wikipedia_field_of_science
- adapt the chunker function below:
```python
import re

def chunker_clean(results, example, length=512, approx_token=3, prefix=""):
    # On the first call, collapse newline runs into single spaces and strip
    # any pre-existing prefix so it is not duplicated when re-added below.
    if len(results) == 0:
        regex_pattern = r'[\n\s]*\n[\n\s]*'
        example = re.sub(regex_pattern, " ", example).strip().replace(prefix, "")
    # Approximate character budget: about approx_token characters per token.
    chunk_length = length * approx_token
    if len(example) > chunk_length:
        # Cut at the last sentence boundary within the budget,
        # falling back to a hard cut if no period is found.
        first = example[:chunk_length]
        chunk = ".".join(first.split(".")[:-1])
        if len(chunk) == 0:
            chunk = first
        rest = example[len(chunk) + 1:]
        results.append(prefix + chunk.strip())
        # Recurse on the remainder until it fits into a single chunk.
        if len(rest) > chunk_length:
            chunker_clean(results, rest.strip(), length=length, approx_token=approx_token, prefix=prefix)
        else:
            results.append(prefix + rest.strip())
    else:
        results.append(prefix + example.strip())
    return results
```
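
A quick usage sketch; the sample text and prefix are illustrative, not taken from the dataset:

```python
# Hypothetical example: split one long page into ~256-token chunks,
# prefixing each chunk with the page title, as the dataset does.
text = "Physics is the natural science of matter and energy. " * 100
chunks = chunker_clean([], text, length=256, approx_token=3, prefix="Physics: ")
print(len(chunks), "chunks")
print(chunks[0][:80])
```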