nghiahieunguyen committed
Commit
cc85c24
1 Parent(s): edc93ad

Update README.md

Files changed (1): README.md (+7 −2)
README.md CHANGED
@@ -13,9 +13,14 @@ OpenViVQA: Open-domain Vietnamese Visual Question Answering
 
 ![examples](data_examples.png)
 
-The OpenViVQA dataset contains <b>11,000+</b> images with <b>37,000+</b> question-answer pairs which introduces the Text-based Open-ended Visual Question Answering in Vietnamese. This dataset is publicly available to research community in the VLSP 2023 - ViVRC shared task challenge. You can access the dataset as well as submit your results to evaluate on the private test set on the [Codalab](https://codalab.lisn.upsaclay.fr/competitions/15212#participate) evaluation system.
+The OpenViVQA dataset contains <b>11,000+</b> images with <b>37,000+</b> question-answer pairs, which introduces the Text-based Open-ended Visual Question Answering task in Vietnamese. The dataset is publicly available to the research community through the VLSP 2023 - ViVRC shared task challenge. You can access the dataset and submit your results for evaluation on the private test set via the [Codalab](https://codalab.lisn.upsaclay.fr/competitions/15212#participate) evaluation system.
 
-If you mention or use any information of our dataset, please cite to our paper:
+Links to the OpenViVQA dataset:
+- [Train images](train-images.zip) + [train annotations](vlsp2023_train_data.json).
+- [Dev images](dev-images.zip) + [dev annotations](vlsp2023_dev_data.json).
+- [Test images](test-images.zip) + [test annotations (without answers)](vlsp2023_test_data.json).
+
+If you mention or use any information from our dataset, please cite our paper:
 ```
 @article{NGUYEN2023101868,
 title = {OpenViVQA: Task, dataset, and multimodal fusion models for visual question answering in Vietnamese},