This is the dataset repository of `OpenDV-YouTube` language annotations, including the command and context annotations described below.

## Usage

To use the annotations, you first need to download and prepare the data as instructed in <a href="https://github.com/OpenDriveLab/DriveAGI/tree/main/opendv" target="_blank">OpenDV-YouTube</a>. **Note that we recommend processing the dataset in a `Linux` environment, since `Windows` may have issues with the file paths.**

You can use the following code to load the annotations.
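
Below is a minimal loading sketch. The validation annotation file `10hz_YouTube_val.json` is part of this repository; the train filename and the merging of both splits into a single `full_annos` list are assumptions here, so adjust the paths to match the files you downloaded.

```python
import json

# Minimal loading sketch. "10hz_YouTube_val.json" is the validation annotation file
# in this repository; the train filename below is an assumption that follows the same
# naming pattern. Adjust both paths to the files you actually downloaded.
full_annos = []
for split_file in ["10hz_YouTube_train.json", "10hz_YouTube_val.json"]:
    with open(split_file, "r") as f:
        full_annos.extend(json.load(f))

print(f"Loaded annotations for {len(full_annos)} video clips.")
```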

Annotations will be loaded into `full_annos` as a list, where each element contains the annotations for one video clip. All elements in the list are dictionaries with the following structure.

```python
{
    "cmd": <int>,          # command, i.e. the command of the ego vehicle in the video clip.
    "blip": <str>,         # context, i.e. the BLIP description of the center frame in the video clip.
    "first_frame": <str>,  # the filename of the first frame in the clip. Note that this file is included in the video clip.
    "last_frame": <str>,   # the filename of the last frame in the clip. Note that this file is included in the video clip.
}
```

The command, *i.e.* the `cmd` field, can be converted to natural language using the `map_category_to_caption` function. You may refer to [cmd2caption.py](https://github.com/OpenDriveLab/DriveAGI/blob/main/opendv/utils/cmd2caption.py#L158) for details.

The context, *i.e.* the `blip` field, is the description of the **center frame** in the video clip, generated by `BLIP2`.
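
A hedged sketch of putting the two fields together is shown below; the import path and the exact signature of `map_category_to_caption` are assumptions here, so check them against `cmd2caption.py` in the DriveAGI repository.

```python
# Hypothetical usage sketch: the import path and the signature of
# map_category_to_caption are assumed here for illustration; check them
# against opendv/utils/cmd2caption.py in the DriveAGI repository.
from utils.cmd2caption import map_category_to_caption

for clip in full_annos:
    command_caption = map_category_to_caption(clip["cmd"])  # natural-language command
    context_caption = clip["blip"]                          # BLIP2 description of the center frame
    print(f"{command_caption} | {context_caption}")
```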

## Citation

If you find our work helpful, please cite the following paper.

```bibtex
@misc{yang2024genad,
      title={Generalized Predictive Model for Autonomous Driving},
      author={Jiazhi Yang and Shenyuan Gao and Yihang Qiu and Li Chen and Tianyu Li and Bo Dai and Kashyap Chitta and Penghao Wu and Jia Zeng and Ping Luo and Jun Zhang and Andreas Geiger and Yu Qiao and Hongyang Li},
      year={2024},
      eprint={2403.09630},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```