Inconsistent data field in github jsonl files
Hello,
A subset of the jsonl files in the RedPajama-Data-1T github split has an inconsistent schema, which causes problems when using
from datasets import load_dataset

ds = load_dataset('json', data_files='/path_to/filtered_10f129bfd0af45caa9cd72aa9d863ec5.sampled.jsonl')
In that particular file there are two entries with an additional "symlink_target" field. Multiple jsonl files are affected, but not all of them.
To find the affected lines we can run:
awk '/symlink_target/ {print FNR}' filtered_10f129bfd0af45caa9cd72aa9d863ec5.sampled.jsonl
49052
128101
grep "symlink_target" filtered_10f129bfd0af45caa9cd72aa9d863ec5.sampled.jsonl
{"text": "boilerplate.html", "meta": {"content_hash": "e9054430576a6ebde6c2a8e3b355b6ea", "timestamp": "", "source": "github", "line_count": 1, "max_line_length": 16, "avg_line_length": 16.0, "alnum_prop": 0.9375, "repo_name": "andrewjbaker/authoring-jquery-plugins", "id": "ea3542e94c7508b6b21f8f04fe7df8fbe164595e", "size": "16", "binary": false, "copies": "2", "ref": "refs/heads/master", "path": "index.html", "mode": "40960", "symlink_target": "boilerplate.html", "license": "mit", "language": [{"name": "CSS", "bytes": "56046"}, {"name": "JavaScript", "bytes": "156348"}]}}
{"text": "../main.tex", "meta": {"content_hash": "154f6382d53b1e1068fa1b50ba623823", "timestamp": "", "source": "github", "line_count": 1, "max_line_length": 11, "avg_line_length": 11.0, "alnum_prop": 0.6363636363636364, "repo_name": "ningchi/banyuan-ppt", "id": "f39c56ef3ec9948e7abae46b4dcdbb0a427e9b2d", "size": "11", "binary": false, "copies": "8", "ref": "refs/heads/master", "path": "tutor/tutor.tex", "mode": "40960", "symlink_target": "../main.tex", "license": "mit", "language": [{"name": "Makefile", "bytes": "600"}, {"name": "Shell", "bytes": "401"}, {"name": "TeX", "bytes": "12723"}]}}
This loading error does not occur when loading the dataset through its own loading script:
ds = load_dataset('togethercomputer/RedPajama-Data-1T', 'github', cache_dir="/path_to_folder_with_jsonl", streaming=True)['train']
but for our particular use case we need the load_dataset('json', ...) approach.
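As a stopgap we can strip the extra field ourselves before loading, along these lines (a minimal sketch; the output file name is a placeholder, and it assumes the stray key only appears inside "meta"):

import json
from datasets import load_dataset

src = "filtered_10f129bfd0af45caa9cd72aa9d863ec5.sampled.jsonl"
dst = "filtered_10f129bfd0af45caa9cd72aa9d863ec5.sampled.cleaned.jsonl"  # placeholder output name

with open(src, encoding="utf-8") as fin, open(dst, "w", encoding="utf-8") as fout:
    for line in fin:
        record = json.loads(line)
        # drop the field that only a handful of records carry
        record.get("meta", {}).pop("symlink_target", None)
        fout.write(json.dumps(record, ensure_ascii=False) + "\n")

ds = load_dataset('json', data_files=dst)  # the cleaned file loads without the schema error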
Would it be possible to clean the dataset so that all entries share a consistent schema?
Related issue: #6162
Hi @Rita, thanks for bringing this up!
We are looking into it. Did you observe this for other splits as well or is it limited to the github split?
@mauriceweber sorry for the delay, I only tested the github split.
Ok, thanks! I noticed this is also an issue for the books split, since there are multiple sources (PG-19 and books3 from the Pile) and they have different metadata fields.