Lots of empty strings

#5 by charles-godfrey - opened

I created an HF dataset like this:

import datasets as hfds

# hf_data_cache_dir is my cache dir of choice (placeholder path here)
hf_data_cache_dir = '/path/to/my/cache'

def wikitext_ds(split: str = 'test'):
    ds = hfds.load_dataset(
        path='wikitext',
        name='wikitext-103-v1',
        split=split,
        cache_dir=hf_data_cache_dir,
    )
    return ds

and loaded it in a notebook with wtds = wikitext_ds(). Running a cell with

# ch here is torch (the randint call matches torch.randint's signature)
import torch as ch

idx = ch.randint(len(wtds), size=(5,))
wtds[idx]

results in a surprising number of empty-string data points, e.g.:

{'text': [' = = = Home media = = = \n',
  '',
  '',
  ' Subsequently , it weakened and made landfall at Jupiter , Florida , early on September 4 with winds of 125 mph ( 201 km / h ) . The hurricane moved across the state , passing near Tampa before moving into Georgia and dissipating . In Florida , the strong winds of the cyclone blew buildings off their foundations , and numerous trees were prostrated in citrus groves . The Treasure Coast region received the most extensive destruction , and Stuart , Jupiter , and Fort Pierce were heavily damaged . Inland , the cyclone weakened rapidly but produced prodigious amounts of rain , causing a dam to collapse near Tampa . The storm caused $ 3 million in damage ( 1933 USD ) after damaging or destroying 6 @,@ 848 homes . \n',
  '']}
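
If the blank rows get in your way, one workaround is to drop them before sampling (a minimal sketch using the standard datasets.Dataset.filter API):

import datasets as hfds

wtds = hfds.load_dataset('wikitext', 'wikitext-103-v1', split='test')

# Keep only rows whose text is non-empty after stripping whitespace.
wtds_nonempty = wtds.filter(lambda ex: ex['text'].strip() != '')
print(len(wtds), '->', len(wtds_nonempty))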

Same issue here.

Same issue

More than 50% of the rows in the train-*.parquet files are empty strings.
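
That ratio is easy to check directly against the loaded split (a minimal sketch that just counts rows whose text field is the empty string):

import datasets as hfds

train = hfds.load_dataset('wikitext', 'wikitext-103-v1', split='train')

# Count rows whose 'text' field is exactly the empty string.
n_empty = sum(1 for t in train['text'] if t == '')
print(f'{n_empty} of {len(train)} rows ({n_empty / len(train):.1%}) are empty')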

See the actual usage of wikitext here: https://huggingface.co/docs/transformers/perplexity. It is straightforward: the rows are individual lines of the corpus, so the empty strings are simply the blank lines between paragraphs, and the guide joins all rows back into one document before tokenizing.
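
The pattern from that page looks roughly like this (a sketch adapted to wikitext-103 to match this thread; the gpt2 tokenizer is just an example choice):

from datasets import load_dataset
from transformers import AutoTokenizer

# Any pretrained tokenizer works here; gpt2 is just an example.
tokenizer = AutoTokenizer.from_pretrained('gpt2')

test = load_dataset('wikitext', 'wikitext-103-v1', split='test')

# Rows are individual lines of the corpus, so the empty strings are the
# blank lines between paragraphs. Joining all rows reconstructs the
# running text before tokenization.
encodings = tokenizer('\n\n'.join(test['text']), return_tensors='pt')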
