Formats: json
Languages: English
Tags: GUI

Dataset Card for GUI Odyssey

Introduction

GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. It consists of 7,735 episodes collected from 6 mobile devices, spanning 6 types of cross-app tasks, 201 apps, and 1.4K app combinations.

Data Structure

Data Fields

Each annotation contains the following fields:

  • episode_id(str): the unique identifier of this episode.
  • device_info(dict): the detailed information of the virtual device from which the episode was collected.
    • product(str): the product name of the emulator.
    • release_version(str): the Android API level of the emulator.
    • sdk_version(str): the version of the software development kit used for the emulator.
    • h(int): the height of the device screen.
    • w(int): the width of the device screen.
    • device_name(str): the name of the virtual device, one of Pixel Fold, Pixel Tablet, Pixel 8 Pro, Pixel 7 Pro, Medium Phone, Small Phone.
  • task_info(dict): the detailed information of the task from which the episode was collected.
    • category(str): the category of this task, one of Multi_Apps, Web_Shopping, General_Tool, Information_Management, Media_Entertainment, Social_Sharing.
    • app(list[str]): the Apps used for this task.
    • meta_task(str): the template for this task, e.g., "Search for the next {} and set a reminder."
    • task(str): the specific task created by filling in the meta-task, e.g., "Search for the next New York Fashion Week and set a reminder."
    • instruction(str): the detailed and rephrased version of the task, including specific tools or applications, e.g., "Utilize DuckDuckgo to find the dates for the next New York Fashion Week and then use TickTick to set a reminder for the event."
  • step_length(int): the total number of steps in this episode.
  • steps(list[dict]): the individual steps of this episode, each including the following fields:
    • step(int): the zero-indexed position of this step within the episode; e.g., step 1 is the second step of the episode.
    • screenshot(str): the current screenshot of this step.
    • action(str): the corresponding action of this step, one of CLICK, SCROLL, LONG_PRESS, TYPE, COMPLETE, IMPOSSIBLE, HOME, BACK.
    • info(Union[str, list[list]]): provides specific details required to perform the action specified in the action field. Note that all the coordinates are normalized to the range of [0, 1000].
      • if action is CLICK, info contains the coordinates(x, y) to click on or one of the special keys KEY_HOME, KEY_BACK, KEY_RECENT.
      • if action is LONG_PRESS, info contains the coordinates(x, y) for the long press.
      • if action is SCROLL, info contains the starting(x1, y1) and ending(x2, y2) coordinates of the scroll action.
      • if action is any other value, info is empty ("").
    • ps(str): provides additional details or context depending on the value of the action field.
      • if action is COMPLETE or IMPOSSIBLE: may contain any additional information from the annotator about why the task is complete or why it was impossible to complete.
      • if action is SCROLL: contains the complete trajectory of the scroll action.
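As a sketch of how these fields fit together, the snippet below walks the steps of one episode and converts the normalized [0, 1000] coordinates back to pixels using the device's w and h. The episode dict shown is a hand-made stand-in, not real data, and the exact shape of info for CLICK/LONG_PRESS (a flat [x, y] pair versus a nested list) is an assumption here; adjust to the actual annotation files.

```python
import json  # annotation files are JSON; a real episode can be loaded with json.load


def denormalize(x, y, width, height):
    """Map a point from the dataset's normalized [0, 1000] range to pixels."""
    return x / 1000 * width, y / 1000 * height


def summarize_episode(episode):
    """Return one (step, action, detail) tuple per step of an episode dict."""
    w, h = episode["device_info"]["w"], episode["device_info"]["h"]
    rows = []
    for step in episode["steps"]:
        action, info = step["action"], step["info"]
        # special keys (e.g. KEY_HOME) arrive as strings; skip denormalization
        if action in ("CLICK", "LONG_PRESS") and not isinstance(info, str):
            # info may be a flat [x, y] pair or nested [[x, y]]; handle both
            x, y = info[0] if isinstance(info[0], (list, tuple)) else info
            rows.append((step["step"], action, denormalize(x, y, w, h)))
        else:
            rows.append((step["step"], action, info))
    return rows


# A minimal hand-made episode for illustration (not real data):
episode = {
    "device_info": {"w": 1080, "h": 2400},
    "steps": [
        {"step": 0, "action": "CLICK", "info": [500, 250]},
        {"step": 1, "action": "TYPE", "info": ""},
    ],
}
print(summarize_episode(episode))
# [(0, 'CLICK', (540.0, 600.0)), (1, 'TYPE', '')]
```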

Data Splits

GUI Odyssey can be split in two ways, enabling evaluation of both the in-domain and out-of-domain performance of an agent:

  • random_split: randomly splits the dataset into training and test sets at a 3:1 ratio.

The other three splits hold out a portion of tasks/devices/apps for the test set, with the training set covering the remainder:

  • task_split: proportionally samples meta-tasks from the six categories. The tasks in the test set differ significantly from those in the training set, allowing a robust assessment of an agent's generalization across diverse tasks.

  • device_split: uses episodes annotated on the Fold Phone, which differs significantly from the other devices such as smartphones and tablets, as the test set.

  • app_split: splits based on the apps. The apps in the test set differ significantly from those in the training set.

Each of the four splits above has a corresponding JSON file with the following fields:

  • train(list[str]): the list of annotation filenames for the training set, which are equivalent to the episode_id.
  • test(list[str]): the list of annotation filenames for the test set, which are equivalent to the episode_id.
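For example, a split could be materialized as below. Everything here is illustrative rather than official tooling: load_split, the directory layout, and the assumption that each entry in the train/test lists resolves directly to an annotation file under annotations_dir.

```python
import json
import os


def load_split(split_path, annotations_dir):
    """Load a split JSON and return (train, test) lists of episode dicts.

    The split file is assumed to contain `train` and `test` lists of
    annotation filenames (equivalent to each episode_id), and each name
    is assumed to resolve to a JSON annotation under `annotations_dir`.
    """

    def load_all(names):
        return [
            json.load(open(os.path.join(annotations_dir, name)))
            for name in names
        ]

    with open(split_path) as f:
        split = json.load(f)
    return load_all(split["train"]), load_all(split["test"])
```

Swapping split_path between the four split files (e.g. a file per random_split, task_split, device_split, and app_split; the exact filenames are an assumption here) then yields the corresponding in- and out-of-domain partitions.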

Licensing Information

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).

Disclaimer

This dataset is intended primarily for research purposes. We strongly oppose any harmful use of the data or technology.

Citation

@misc{lu2024gui,
      title={GUI Odyssey: A Comprehensive Dataset for Cross-App GUI Navigation on Mobile Devices}, 
      author={Quanfeng Lu and Wenqi Shao and Zitao Liu and Fanqing Meng and Boxuan Li and Botong Chen and Siyuan Huang and Kaipeng Zhang and Yu Qiao and Ping Luo},
      year={2024},
      eprint={2406.08451},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}