Dataset Card for TvTroper
TvTroper is a public raw dataset of scraped pages from TvTropes.org.
Dataset Summary
TvTroper is a raw dataset dump consisting of the text of up to 651,522 wiki pages (excluding namespaces and date-grouped pages) from tvtropes.org.
Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
- text-classification
- text-generation
Languages
- English
Dataset Structure
All data is stored in JSON Lines (.jsonl) files, which have been compressed into a 20 GB .zip archive.
Data Instances
["https://tvtropes.org/pmwiki/pmwiki.php/HaruhiSuzumiya/TropesJToN","<!DOCTYPE html>\n\t<html>\n\t\t<head lang=\"en\">\n...<TRUNCATED>"]
Data Fields
There are only two fields in each list: the URL and the retrieved content. The retrieved content may contain errors; if a page does not exist, its 404 error page is scraped instead.
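A minimal sketch of reading the records with plain Python (the filename `Main.jsonl` is a hypothetical example; actual files follow the namespace split described below):

```python
import json

# Each line is a two-element JSON array: [url, html_content].
# "Main.jsonl" is a hypothetical filename; the dump ships one
# .jsonl file per namespace inside the .zip archive.
with open("Main.jsonl", encoding="utf-8") as f:
    for line in f:
        url, content = json.loads(line)
        # e.g. inspect the record
        print(url, len(content))
```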
In the case of one specific URL, https://tvtropes.org/pmwiki/pmwiki.php/JustForFun/RedirectLoop, the page endlessly redirects to itself. For such occurrences, we have used the following HTML as a placeholder:
<!DOCTYPE html><html><head lang=\"en\"><title>Error: URL Exceeds maximum allowed redirects.</title></head><body class=\"\"><div>Error: URL Exceeds maximum allowed redirects.</div></body></html>
A URL may not match the final URL from which the page was actually retrieved, as redirects may have been followed during scraping.
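Since the placeholder body is a fixed string, such rows can be detected and skipped during iteration; a minimal sketch:

```python
# The exact placeholder string documented above.
REDIRECT_PLACEHOLDER = "Error: URL Exceeds maximum allowed redirects."

def is_redirect_placeholder(content: str) -> bool:
    # True for pages that hit the scraper's redirect limit.
    return REDIRECT_PLACEHOLDER in content
```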
Q-Score Distribution
Not Applicable
Data Splits
The .jsonl files are split by namespace.
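The namespace of each record can also be recovered from its URL; a sketch assuming the usual `pmwiki.php/<Namespace>/<PageName>` path layout:

```python
from urllib.parse import urlparse

def namespace_of(url: str) -> str:
    # "/pmwiki/pmwiki.php/Main/HeroicSacrifice" -> "Main"
    parts = urlparse(url).path.split("/")
    return parts[3] if len(parts) > 3 else ""
```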
Dataset Creation
Curation Rationale
We have curated TvTropes.org because it serves as one of the best resources for the common themes, narrative devices, and character archetypes that shape stories around the world.
Source Data
Initial Data Collection and Normalization
None. No normalization is performed, as this is a raw dump.
Who are the source language producers?
The editors and users of TvTropes.org.
Annotations
Annotation process
No annotations are present.
Who are the annotators?
No human annotators.
Personal and Sensitive Information
We are certain there is no PII included in the dataset.
Considerations for Using the Data
Social Impact of Dataset
This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content. It may also be useful for languages other than English, depending on your language model.
Discussion of Biases
This dataset mainly contains tropes as used in media.
Other Known Limitations
N/A
Additional Information
Dataset Curators
KaraKaraWitch
Licensing Information
Apache 2.0 for all parts of which KaraKaraWitch may be considered the author. All other material is distributed under fair-use principles.
Additionally, Ronsor Labs is permitted to relicense the dataset, provided it has gone through processing.
Citation Information
@misc{tvtroper,
title = {TvTroper: Tropes & Others.},
author = {KaraKaraWitch},
year = {2023},
howpublished = {\url{https://huggingface.co/datasets/RyokoExtra/TvTroper}},
}
Name Etymology
N/A
Contributions
- @KaraKaraWitch (Twitter) for gathering this dataset.