Dataset Viewer issue - Heyo, any ideas how to get a good viewer live? Couldn't see this issue before hitting the public button.
#1
by
Smooke
- opened
The dataset viewer is not working.
Error details:
Error code: FeaturesError
Exception: ParserError
Message: Error tokenizing data. C error: Expected 9 fields in line 5, saw 12
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/split/first_rows_from_streaming.py", line 176, in compute_first_rows_response
iterable_dataset = iterable_dataset._resolve_features()
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 2211, in _resolve_features
features = _infer_features_from_batch(self.with_format(None)._head())
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1235, in _head
return _examples_to_batch(list(self.take(n)))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1384, in __iter__
for key, example in ex_iterable:
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1040, in __iter__
yield from islice(self.ex_iterable, self.n)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 282, in __iter__
for key, pa_table in self.generate_tables_fn(**self.kwargs):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/csv/csv.py", line 187, in _generate_tables
for batch_idx, df in enumerate(csv_file_reader):
File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 1624, in __next__
return self.get_chunk()
File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 1733, in get_chunk
return self.read(nrows=size)
File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 1704, in read
) = self._engine.read( # type: ignore[attr-defined]
File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 234, in read
chunks = self._reader.read_low_memory(nrows)
File "pandas/_libs/parsers.pyx", line 826, in pandas._libs.parsers.TextReader.read_low_memory
File "pandas/_libs/parsers.pyx", line 875, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 850, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 861, in pandas._libs.parsers.TextReader._check_tokenize_status
File "pandas/_libs/parsers.pyx", line 2029, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Expected 9 fields in line 5, saw 12
Should we split up the dataset into smaller files?
Looking at it.
The error seems to say that one of the rows has 12 columns instead of 9:
pandas.errors.ParserError: Error tokenizing data. C error: Expected 9 fields in line 5, saw 12
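To locate rows like that, one option is to count fields per line with the stdlib csv module before handing the file to pandas. This is a minimal sketch on an inline sample (the data and column names are hypothetical, not from the actual dataset); unquoted commas inside a field make the last row appear to have extra columns:

```python
import csv
import io

# Hypothetical CSV: the last row has unquoted commas in a field,
# so the parser sees 5 fields instead of the expected 3.
raw = (
    "name,url,description\n"
    "Acme,https://example.com/acme,A tools company\n"
    "Beta,https://example.com/beta,chips, sensors, and more\n"
)

expected = 3  # field count in the header row
bad_lines = []
for lineno, row in enumerate(csv.reader(io.StringIO(raw)), start=1):
    if len(row) != expected:
        bad_lines.append((lineno, len(row)))

print(bad_lines)  # -> [(3, 5)]: line 3 tokenized into 5 fields
```

Running the same loop over the real file (replacing the StringIO with `open("updatedCompanyNews.csv")`) would list every offending line number.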
I also tried to run xsv commands on the CSV:
$ xsv count updatedCompanyNews.csv
CSV error: record 1 (line: 2, byte: 75): found record with 9 fields, but the previous record has 7 fields
I think it's a matter of enclosing the strings with ", because they can contain commas.
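If the file is regenerated programmatically, the stdlib csv writer handles this automatically: with the default `QUOTE_MINIMAL` quoting, any field containing the delimiter gets wrapped in double quotes, so the embedded commas no longer split the row. A minimal sketch (the row contents here are illustrative, not the full dataset):

```python
import csv
import io

rows = [
    ["companyName", "description"],
    # The description contains commas; the writer will quote it.
    ["01Synergy", "(Nasdaq: ON), a leader in intelligent power"],
]

buf = io.StringIO()
# QUOTE_MINIMAL (the default) quotes only fields that need it,
# which is enough for the pandas C parser to tokenize correctly.
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerows(rows)
print(buf.getvalue())
# companyName,description
# 01Synergy,"(Nasdaq: ON), a leader in intelligent power"
```

Writing the dataset with this instead of plain string joins (and without backslash-escaping commas, which the C parser doesn't honor by default) should fix the viewer.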
$ head updatedCompanyNews.csv -n2
companyName, companyUrl, published_at, url, title, main_image, description
01Synergy, https://hackernoon.com/company/01synergy, 2023-05-16 02:09:00, https://www.businesswire.com/news/home/20230515005855/en/onsemi-and-Sineng-Electric-Spearhead-the-Development-of-Sustainable-Energy-Applications/, onsemi and Sineng Electric Spearhead the Development of Sustainable Energy Applications, https://firebasestorage.googleapis.com/v0/b/hackernoon-app.appspot.com/o/images%2Fimageedit_25_7084755369.gif?alt=media&token=ca7527b0-a214-46d4-af72-1062b3df1458, (Nasdaq: ON)\, a leader in intelligent power and sensing technologies, today announced that Sineng Electric will integrate onsemi EliteSiC silic
Also note that you might want to remove the space after the delimiters, i.e.: 01Synergy, https -> 01Synergy,https
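Best is to strip those spaces in the file itself, but as a reader-side workaround the stdlib csv module (and pandas' `read_csv`) accept `skipinitialspace=True`, which drops whitespace right after each delimiter. A small sketch with a made-up two-column sample:

```python
import csv
import io

# Hypothetical sample mimicking the "delimiter followed by a space" layout.
raw = (
    "companyName, companyUrl\n"
    "01Synergy, https://hackernoon.com/company/01synergy\n"
)

rows = list(csv.reader(io.StringIO(raw), skipinitialspace=True))
print(rows[1])
# -> ['01Synergy', 'https://hackernoon.com/company/01synergy']
```

Without `skipinitialspace=True`, the second field would come back as `' https://...'` with a leading space baked into the value.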
Thanks @severo for the details. I think @fabian339 got it! Our dataset viewer is live with 68,952 paginations!
Smooke changed discussion status to closed
very nice!