How to improve the wikimedia/wikipedia dataset
This is a space for the community to propose/discuss possible future improvements for the next version of the wikimedia/wikipedia dataset.
The dataset loading script is located at: https://huggingface.co/datasets/wikimedia/wikipedia/blob/script/wikipedia.py
Potential axes of improvement:
- data integrity: see the discussion "Missing important parts of text"
- performance: faster, more optimized data processing
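For context, this is how the current version of the dataset is typically loaded (the date/language configuration below is just one example; any available config works the same way):

```python
from datasets import load_dataset

# Load one language/date configuration of the current wikimedia/wikipedia dataset
# ("20231101.en" is only an example config name).
wiki = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")

print(wiki[0]["title"])
print(wiki[0]["text"][:200])
```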
I have pushed a version of the new Wikipedia script to the "script-html" branch: https://huggingface.co/datasets/wikimedia/wikipedia/tree/script-html
- For the moment, it only works in streaming mode (see the sketch below)
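If you want to try it out, something along these lines should work; note that the config name, and whether `trust_remote_code` is required for your `datasets` version, are assumptions on my side:

```python
from datasets import load_dataset

# Sketch: stream the new HTML-based script from the "script-html" branch.
# The config name is only an example; check the branch for the actual
# configurations it exposes.
wiki = load_dataset(
    "wikimedia/wikipedia",
    "20231101.en",
    revision="script-html",   # load the loading script from that branch
    streaming=True,           # the branch currently supports streaming only
    trust_remote_code=True,   # may be required for script-based datasets
)

for example in wiki["train"]:
    print(example["title"])
    break
```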
Hi Albert, does the version from the script-html branch fix the data integrity issues (like the list from https://huggingface.co/datasets/wikimedia/wikipedia/discussions/59#66100cb0150eb83552bd997a)?
That is the intention... We are still working on it...
While working with the Wikimedia Enterprise Snapshot API, we discovered that some Wikipedia articles appear multiple times (as different revisions). We contacted their support and they confirmed that this is the case:
With the "Snapshots" dataset, some WME clients want multiple article revisions, while others only need the latest article. Our goal is to let clients choose how to handle older revisions.
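Until the loading script handles this, a simple workaround is to keep only the most recent revision of each article when reading a snapshot file. This is just a sketch: the field names (`identifier`, `date_modified`) reflect our reading of the WME article schema, and the file path in the usage comment is hypothetical.

```python
import json

def latest_revisions(ndjson_path):
    """Keep only the most recent revision of each article in a WME snapshot file.

    Assumes one JSON object per line with `identifier` and `date_modified`
    fields, which is our reading of the snapshot schema.
    """
    latest = {}
    with open(ndjson_path, encoding="utf-8") as f:
        for line in f:
            article = json.loads(line)
            key = article["identifier"]
            prev = latest.get(key)
            # ISO 8601 timestamps compare correctly as strings.
            if prev is None or article["date_modified"] > prev["date_modified"]:
                latest[key] = article
    return list(latest.values())

# Hypothetical usage on one extracted snapshot chunk:
# articles = latest_revisions("enwiki_namespace_0_0.ndjson")
```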