VQAv2 in Vietnamese

This is a Google-translated version of VQAv2 in Vietnamese. The Vietnamese version was built as follows:

  • In the en/ folder,
    • Download v2_OpenEnded_mscoco_train2014_questions.json and v2_mscoco_train2014_annotations.json from VQAv2.
    • Remove the answers key from each entry under the annotations key of v2_mscoco_train2014_annotations.json; only the multiple_choice_answer key of each annotation is used. Call the new file v2_OpenEnded_mscoco_train2014_answers.json.
    • Using a set data structure, I generate question_list.txt and answer_list.txt of unique texts (see the sketch after this list). There are 152,050 unique questions and 22,531 unique answers from 443,757 image-question-answer triplets.
  • In the vi/ folder,
    • By translating the two .txt files in en/, I generate answer_list.jsonl and question_list.jsonl. In each entry of each file, the key is the original English text and the value is the translated Vietnamese text.
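
The en/ preprocessing above can be sketched roughly as follows. This is a minimal illustration that assumes the standard VQAv2 JSON layout (a "questions" list and an "annotations" list); the file names follow the steps above, but this is not the exact script used.

```python
# Rough sketch of the en/ preprocessing steps described above (not the exact script used).
import json

# Load the original VQAv2 questions and annotations.
with open("en/v2_OpenEnded_mscoco_train2014_questions.json") as f:
    questions = json.load(f)
with open("en/v2_mscoco_train2014_annotations.json") as f:
    annotations = json.load(f)

# Drop the "answers" key of each annotation, keeping only "multiple_choice_answer",
# and save the result as the new answers file.
for ann in annotations["annotations"]:
    ann.pop("answers", None)
with open("en/v2_OpenEnded_mscoco_train2014_answers.json", "w") as f:
    json.dump(annotations, f)

# Collect unique question and answer texts with sets, then write one per line.
unique_questions = {q["question"] for q in questions["questions"]}
unique_answers = {a["multiple_choice_answer"] for a in annotations["annotations"]}
with open("en/question_list.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(sorted(unique_questions)))
with open("en/answer_list.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(sorted(unique_answers)))
```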

To load the Vietnamese version in your code, you need the original English version. Then just use the English text as the key to retrieve the Vietnamese value from answer_list.jsonl and question_list.jsonl. I provide both the English and Vietnamese versions.
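
As a concrete illustration of this lookup, here is a minimal sketch. It assumes each line of the .jsonl files holds a single {"english text": "vietnamese text"} object, which is one reading of the description above rather than the author's exact loading code.

```python
# Minimal sketch: retrieve Vietnamese text by using the English text as the key.
# Assumes each .jsonl line is a single {"<english>": "<vietnamese>"} object.
import json

def load_translation_map(path):
    """Merge every per-line English->Vietnamese pair into one dictionary."""
    mapping = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            mapping.update(json.loads(line))
    return mapping

question_vi = load_translation_map("vi/question_list.jsonl")
answer_vi = load_translation_map("vi/answer_list.jsonl")

# Look up the Vietnamese version of the English questions from the original file.
with open("en/v2_OpenEnded_mscoco_train2014_questions.json") as f:
    questions = json.load(f)["questions"]

for q in questions[:3]:
    print(q["question"], "->", question_vi.get(q["question"]))
```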

Please refer to this code to apply the translation.
