---
language:
- en
license: odc-by
size_categories:
- n<1K
task_categories:
- text-generation
- fill-mask
dataset_info:
  features:
  - name: section
    dtype: string
  - name: filename
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: validation
    num_bytes: 134490
    num_examples: 1
  - name: test
    num_bytes: 3845881
    num_examples: 2
  - name: train
    num_bytes: 60701376
    num_examples: 46
  download_size: 64556994
  dataset_size: 64681747
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---

# law books (nougat-small)


A decent chunk of the Survivor Library law collection: https://www.survivorlibrary.com/index.php/8-category/173-library-law
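
Each record carries three string fields (`section`, `filename`, `text`), and the data is split into `train`/`validation`/`test` as listed in the card metadata above. A minimal loading sketch with the 🤗 `datasets` library (nothing here is specific to this dataset beyond the repo id and the field names from the metadata):

```python
from datasets import load_dataset

# pull all splits (train / validation / test) from the Hub
ds = load_dataset("BEE-spoke-data/survivorslib-law-books")
print(ds)

# each row has `section`, `filename`, and `text`
sample = ds["train"][0]
print(sample["filename"])
print(sample["text"][:500])  # the documents are long; preview the first 500 chars
```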


```text
(ki) primerdata-for-LLMs python push_dataset_from_text.py /home/pszemraj/Dropbox/programming-projects/primerdata-for-LLMs/utils/output-hf-nougat-space/law -e .md -r BEE-spoke-data/survivorslib-law-books
INFO:__main__:Looking for files with extensions: ['md']
Processing md files: 100%|███████████████████████████████| 46/46 [00:00<00:00, 778.32it/s]
INFO:__main__:Found 46 text files.
INFO:__main__:Performing train-test split...
INFO:__main__:Performing validation-test split...
INFO:__main__:Train size: 43
INFO:__main__:Validation size: 1
INFO:__main__:Test size: 2
INFO:__main__:Pushing dataset
```
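
The log above is from the upload run. The snippet below is **not** the actual `push_dataset_from_text.py`; it is only a rough sketch of the same workflow (gather `.md` files, hold out a small validation/test pool, push to the Hub) using the `datasets` library. The repo id and split sizes come from the log; the local path and the `section` = parent-folder mapping are assumptions.

```python
from pathlib import Path

from datasets import Dataset, DatasetDict


def build_and_push(
    src_dir: str,
    ext: str = ".md",
    repo_id: str = "BEE-spoke-data/survivorslib-law-books",
) -> None:
    # one record per text file; using the parent folder name as `section` is an assumption
    records = [
        {
            "section": p.parent.name,
            "filename": p.name,
            "text": p.read_text(encoding="utf-8"),
        }
        for p in sorted(Path(src_dir).rglob(f"*{ext}"))
    ]

    ds = Dataset.from_list(records)

    # 46 files total -> hold out 3, then split those into 1 validation + 2 test
    train_eval = ds.train_test_split(test_size=3, seed=42)
    val_test = train_eval["test"].train_test_split(test_size=2, seed=42)

    DatasetDict(
        {
            "train": train_eval["train"],
            "validation": val_test["train"],
            "test": val_test["test"],
        }
    ).push_to_hub(repo_id)


if __name__ == "__main__":
    build_and_push("output-hf-nougat-space/law")  # hypothetical local path
```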