Modalities: Text · Languages: Spanish · Libraries: Datasets
parquet-converter committed
Commit: d6ae57e
Parent: a0f3ea7

Update parquet files
.gitattributes DELETED
@@ -1,55 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
- hftrain_en.json filter=lfs diff=lfs merge=lfs -text
.gitignore DELETED
@@ -1,3 +0,0 @@
-
- hftrain_en.json
- hfeval_es.json
README.md DELETED
@@ -1,145 +0,0 @@
- ---
- YAML tags:
- annotations_creators:
- - automatically-generated
- language_creators:
- - found
- language:
- - es
- license:
- - cc-by-sa-3.0
- multilinguality:
- - monolingual
- pretty_name: wikicat_esv2
- size_categories:
- - unknown
- source_datasets: []
- task_categories:
- - text-classification
- task_ids:
- - multi-class-classification
- ---
-
- # WikiCAT_es: Spanish Text Classification dataset
-
-
- ## Dataset Description
-
- - **Paper:**
-
- - **Point of Contact:** [email protected]
-
-
- **Repository**
-
-
-
-
- ### Dataset Summary
-
- WikiCAT_es is a Spanish corpus for thematic text classification tasks. It was created automatically from Wikipedia and Wikidata sources, and contains 8,401 articles from the Spanish Wikipedia classified under 12 different categories.
-
- This dataset was developed by BSC TeMU as part of the PlanTL project, and is intended as an evaluation of language-technology capabilities for generating useful synthetic corpora.
-
- ### Supported Tasks and Leaderboards
-
- Text classification, language modelling
-
- ### Languages
-
- ES - Spanish
-
- ## Dataset Structure
-
- ### Data Instances
-
- Two JSON files, one for each split.
-
- ### Data Fields
-
- We used a simple schema with the article text and its associated label, without further metadata.
-
- #### Example:
-
- <pre>
- {'sentence': 'La economía de Reunión se ha basado tradicionalmente en la agricultura. La caña de azúcar ha sido el cultivo principal durante más de un siglo, y en algunos años representa el 85% de las exportaciones. El gobierno ha estado impulsando el desarrollo de una industria turística para aliviar el alto desempleo, que representa más del 40% de la fuerza laboral.(...) El PIB total de la isla fue de 18.800 millones de dólares EE.UU. en 2007.', 'label': 'Economía'}
-
-
- </pre>
-
- #### Labels
-
- 'Religión', 'Entretenimiento', 'Música', 'Ciencia_y_Tecnología', 'Política', 'Economía', 'Matemáticas', 'Humanidades', 'Deporte', 'Derecho', 'Historia', 'Filosofía'
-
- ### Data Splits
-
- * hfeval_esv5.json: 1681 label-document pairs
- * hftrain_esv5.json: 6716 label-document pairs
-
-
- ## Dataset Creation
-
- ### Methodology
-
- The "Category" pages represent the topics.
- For each topic, we extract the pages associated with that first level of the hierarchy and use their summaries ("summary") as the representative text.
-
- ### Curation Rationale
-
-
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- The source data are the thematic categories of the different Wikipedias.
-
- #### Who are the source language producers?
-
-
- ### Annotations
-
- #### Annotation process
- Automatic annotation
-
- #### Who are the annotators?
-
- [N/A]
-
- ### Personal and Sensitive Information
-
- No personal or sensitive information is included.
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- We hope this corpus contributes to the development of language models in Spanish.
-
- ### Discussion of Biases
-
- We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
-
- ### Other Known Limitations
-
- [N/A]
-
- ## Additional Information
-
- ### Dataset Curators
-
- Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]).
-
- For further information, send an email to [email protected].
-
- This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
-
- ### Licensing Information
-
- This work is licensed under a [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) license.
-
- Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
-
- ### Contributions
-
- [N/A]
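
The card deleted above describes a flat {'sentence': ..., 'label': ...} record layout stored under a top-level "data" key (the same layout the loading script below reads). As a minimal sketch, assuming a local copy of hftrain_esv5.json has been downloaded, a split can be inspected like this:

```python
import json
from collections import Counter

# Hypothetical local path; the card lists hftrain_esv5.json with 6716 pairs.
with open("hftrain_esv5.json", encoding="utf-8") as f:
    split = json.load(f)

examples = split["data"]                      # [{'sentence': ..., 'label': ...}, ...]
print(len(examples))                          # expected: 6716 for the train split
print(Counter(e["label"] for e in examples))  # distribution over the 12 categories
```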
WikiCAT_esv2.py DELETED
@@ -1,88 +0,0 @@
- # Loading script for the WikiCAT_es dataset.
- import json
- import datasets
-
- logger = datasets.logging.get_logger(__name__)
-
- _CITATION = """
-
- """
-
- _DESCRIPTION = """
- WikiCAT_es: Spanish Text Classification dataset built from the Spanish Wikipedia
-
- """
-
- _HOMEPAGE = """ """
-
- # TODO: upload datasets to github
- _URL = "https://huggingface.co/datasets/crodri/WikiCAT_esv2/resolve/main/"
- _TRAINING_FILE = "hftrain_esv5.json"
- _DEV_FILE = "hfeval_esv5.json"
- # _TEST_FILE = "test.json"
-
-
- class wikiCAT_esConfig(datasets.BuilderConfig):
-     """Builder config for the wikiCAT_es dataset."""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig for wikiCAT_es.
-         Args:
-           **kwargs: keyword arguments forwarded to super.
-         """
-         super(wikiCAT_esConfig, self).__init__(**kwargs)
-
-
- class wikiCAT_es(datasets.GeneratorBasedBuilder):
-     """wikiCAT_es Dataset"""
-
-     BUILDER_CONFIGS = [
-         wikiCAT_esConfig(
-             name="wikiCAT_es",
-             version=datasets.Version("1.1.0"),
-             description="wikiCAT_es",
-         ),
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "text": datasets.Value("string"),
-                     "label": datasets.features.ClassLabel(
-                         names=['Religión', 'Entretenimiento', 'Música', 'Ciencia_y_Tecnología', 'Política', 'Economía', 'Matemáticas', 'Humanidades', 'Deporte', 'Derecho', 'Historia', 'Filosofía']
-                     ),
-                 }
-             ),
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         urls_to_download = {
-             "train": f"{_URL}{_TRAINING_FILE}",
-             "dev": f"{_URL}{_DEV_FILE}",
-             # "test": f"{_URL}{_TEST_FILE}",
-         }
-         downloaded_files = dl_manager.download_and_extract(urls_to_download)
-
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
-             # datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
-         ]
-
-     def _generate_examples(self, filepath):
-         """This function returns the examples in the raw (text) form."""
-         logger.info("generating examples from = %s", filepath)
-         with open(filepath, encoding="utf-8") as f:
-             wikiCAT_es = json.load(f)
-         for id_, article in enumerate(wikiCAT_es["data"]):
-             text = article["sentence"]
-             label = article["label"]
-             yield id_, {
-                 "text": text,
-                 "label": label,
-             }
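
The deleted script defines a GeneratorBasedBuilder that downloads the two JSON splits and exposes them as train/validation with a 12-way ClassLabel. A sketch of how the builder is consumed, with the dataset ID taken from `_URL` above; depending on the installed `datasets` version, script-based loading may require `trust_remote_code=True` (older versions can drop the argument):

```python
import datasets

# Loads the train/validation splits defined by _split_generators().
ds = datasets.load_dataset("crodri/WikiCAT_esv2", trust_remote_code=True)

print(ds)                                      # DatasetDict with 'train' and 'validation'
label_feature = ds["train"].features["label"]
print(label_feature.names)                     # the 12 categories declared in _info()
print(label_feature.int2str(ds["train"][0]["label"]))  # label of the first example
```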
hfeval_esv5.json DELETED
The diff for this file is too large to render. See raw diff
 
hftrain_esv5.json DELETED
The diff for this file is too large to render. See raw diff
 
wikiCAT_es/wiki_cat_esv2-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17d3fcffbb88dfdb563bec46c3ec468d82afaa2d4d7577bf100250dc9ec33dd1
+ size 4159560
wikiCAT_es/wiki_cat_esv2-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d9506093df6ce418fc869d715a8b8b57ffcec1d0bc2d45a96a70911e5e133f7
+ size 1078711
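
The two entries above are Git LFS pointer stubs for the converted Parquet files (about 4.2 MB for train and 1.1 MB for validation). A sketch of reading one of them directly: the repo ID comes from the loading script's `_URL`, the file path is as added in this commit, and the revision is an assumption about the dedicated branch the parquet-converter bot writes to.

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Fetch the converted train split by the path added in this commit.
# revision is an assumption; parquet-converter commits typically live on their own branch.
path = hf_hub_download(
    repo_id="crodri/WikiCAT_esv2",
    repo_type="dataset",
    filename="wikiCAT_es/wiki_cat_esv2-train.parquet",
    revision="refs/convert/parquet",
)

train = pd.read_parquet(path)
print(train.shape)              # roughly (6716, 2)
print(train.columns.tolist())   # expected: ['text', 'label'], label as ClassLabel integers
```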