---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: caption
    list: string
  - name: sentids
    list: string
  - name: split
    dtype: string
  - name: img_id
    dtype: string
  - name: filename
    dtype: string
  splits:
  - name: train
    num_bytes: 4044387988
    num_examples: 29000
  - name: test
    num_bytes: 142155397
    num_examples: 1000
  - name: validation
    num_bytes: 140557396.192
    num_examples: 1014
  download_size: 4306311970
  dataset_size: 4327100781.192
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
task_categories:
- text-generation
- image-to-text
- text-to-image
language:
- pt
pretty_name: Flickr30K Portuguese Translated
size_categories:
- 10K<n<100K
---
# 🎉 Flickr30K Portuguese Translated
## 💾 Dataset Summary
Flickr30K Portuguese Translated consists of 31,014 images, each paired with five descriptive captions written by human
annotators. The original English captions were translated into Portuguese using the Google Translator API.
The dataset is one of the results of the work available at: https://github.com/laicsiifes/ved-transformer-caption-ptbr.
## 🧑‍💻 How to Get Started with the Dataset
```python
from datasets import load_dataset
dataset = load_dataset('laicsiifes/flickr30k-pt-br')
```
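If only one split is needed, `load_dataset` also accepts a `split` argument. A minimal sketch (the split names follow the configuration above):

```python
from datasets import load_dataset

# Load a single split instead of the full DatasetDict
test_set = load_dataset('laicsiifes/flickr30k-pt-br', split='test')

print(len(test_set))  # 1,000 examples, matching the split table below
```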
## ✍️ Languages
The image descriptions in the dataset are in Portuguese.
## 🧱 Dataset Structure
### 📝 Data Instances
An example instance looks like this:
```
{
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333>,
  'caption': [
    'Um cachorro preto carrega um brinquedo verde na boca enquanto caminha pela grama.',
    'Um cachorro preto molhado carrega um brinquedo verde pela grama.',
    'Um cachorro preto carregando algo pela grama.',
    'Um cachorro na grama com um item azul na boca.',
    'Um cachorro preto tem um brinquedo azul na boca.'
  ],
  'sentids': ['450', '451', '452', '453', '454'],
  'split': 'train',
  'img_id': '90',
  'filename': '1026685415.jpg'
}
```
### 🗃️ Data Fields
The data instances have the following fields:
- `image`: a `PIL.Image.Image` object containing the image.
- `caption`: a `list` of `str` containing the 5 captions related to the image.
- `sentids`: a `list` of `str` containing the 5 ordered identification numbers related to each caption.
- `split`: a `str` containing the data split. It takes one of the values `train`, `val` or `test`.
- `img_id`: a `str` containing the image identification number.
- `filename`: a `str` containing the name of the image file.
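As an illustrative sketch (assuming the dataset was loaded as `dataset` in the snippet above), the fields of a single instance can be accessed as follows:

```python
example = dataset['train'][0]

# 'image' is decoded into a PIL.Image.Image by the Image feature
width, height = example['image'].size

# 'caption' and 'sentids' are parallel lists with 5 entries each
for sentid, caption in zip(example['sentids'], example['caption']):
    print(sentid, caption)

print(example['split'], example['img_id'], example['filename'])
```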
### ✂️ Data Splits
The dataset is partitioned using the Karpathy splitting approach for image captioning
([Karpathy and Fei-Fei, 2015](https://arxiv.org/pdf/1412.2306)).
|Split|Samples|Average Caption Length (Words)|
|:-----------:|:-----:|:--------:|
|Train|29,000|12.1 ± 5.1|
|Validation|1,014|12.3 ± 5.3|
|Test|1,000|12.2 ± 5.4|
|Total|31,014|12.1 ± 5.2|
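For reference, caption-length statistics like those in the table can be approximated with whitespace tokenization (an assumption here; the exact tokenization used for the table is not specified), reusing the `dataset` object loaded above:

```python
import statistics

# Pull only the caption column so the images are not decoded
all_captions = dataset['train']['caption']

# Caption length in words; whitespace tokenization is an assumption
lengths = [len(c.split()) for caps in all_captions for c in caps]

print(f'{statistics.mean(lengths):.1f} ± {statistics.stdev(lengths):.1f}')
```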
## 📋 BibTeX entry and citation info
```bibtex
@inproceedings{bromonschenkel2024comparative,
title = "A Comparative Evaluation of Transformer-Based Vision
Encoder-Decoder Models for Brazilian Portuguese Image Captioning",
author = "Bromonschenkel, Gabriel and Oliveira, Hil{\'a}rio and
Paix{\~a}o, Thiago M.",
booktitle = "Proceedings...",
organization = "Conference on Graphics, Patterns and Images, 37. (SIBGRAPI)",
year = "2024",
}
```