---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: prompt
      dtype: string
    - name: label
      dtype: string
  splits:
    - name: train
      num_bytes: 23257283614.36
      num_examples: 153128
  download_size: 23241036646
  dataset_size: 23257283614.36
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - object-detection
language:
  - tr
---

This dataset is a combined and deduplicated version of the COCO-2014 and COCO-2017 object detection datasets. The labels are in Turkish, and the dataset is in an instruction-tuning format with separate columns for prompts and completion labels.
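
As a quick sketch of how the columns might be consumed with the 🤗 `datasets` library (the repo id below is a placeholder; replace it with this dataset's actual Hub path):

```python
from datasets import load_dataset

# Placeholder repo id; substitute the actual Hub path of this dataset.
ds = load_dataset("ucsahin/REPLACE-WITH-DATASET-ID", split="train")

example = ds[0]
image = example["image"]    # PIL image
prompt = example["prompt"]  # Turkish instruction, e.g. an object-detection request
label = example["label"]    # completion string with <loc...> tokens and Turkish labels
```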

For the bounding boxes, an annotation scheme similar to PaliGemma's is used. That is:

The bounding box coordinates are encoded as special `<loc[value]>` tokens, where `value` is an integer representing a normalized coordinate. Each detection consists of four location tokens in the order x_min (left), y_min (top), x_max (right), y_max (bottom), followed by the label detected in that box. To convert the values to pixel coordinates, first divide each number by 1024, then multiply the y values by the image height and the x values by the image width. This gives the bounding box coordinates relative to the original image size.
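
To make the conversion concrete, here is a minimal Python sketch. It is not part of the dataset itself: `parse_detections` is a hypothetical helper, and it assumes four-digit zero-padded `<loc>` values with detections separated by whitespace or semicolons.

```python
import re

# One detection: four zero-padded <loc> values (x_min, y_min, x_max, y_max),
# each normalized to the range 0-1024, followed by the detected label.
_DETECTION_RE = re.compile(
    r"<loc(\d{4})><loc(\d{4})><loc(\d{4})><loc(\d{4})>\s*([^<]+)"
)

def parse_detections(text: str, image_width: int, image_height: int) -> list[dict]:
    """Convert a <loc...> detection string into pixel-space bounding boxes."""
    boxes = []
    for x_min, y_min, x_max, y_max, label in _DETECTION_RE.findall(text):
        boxes.append({
            "x_min": int(x_min) / 1024 * image_width,
            "y_min": int(y_min) / 1024 * image_height,
            "x_max": int(x_max) / 1024 * image_width,
            "y_max": int(y_max) / 1024 * image_height,
            "label": label.strip("; "),
        })
    return boxes

# Example: one detection of "kedi" (cat) on a 640x480 image.
print(parse_detections("<loc0256><loc0128><loc0768><loc0896> kedi", 640, 480))
# [{'x_min': 160.0, 'y_min': 60.0, 'x_max': 480.0, 'y_max': 420.0, 'label': 'kedi'}]
```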