data
If you are looking for our intermediate labeling version, please refer to mango-ttic/data-intermediate.
Find out more about us at mango.ttic.edu.
Folder Structure
Each folder inside data contains the cleaned-up files used during LLM inference and results evaluation. Here is the tree structure for the game data/night:
data/night/
├── night.actions.json # list of mentioned actions
├── night.all2all.jsonl # all simple paths between any 2 locations
├── night.all_pairs.jsonl # all connectivity between any 2 locations
├── night.edges.json # list of all edges
├── night.locations.json # list of all locations
└── night.walkthrough # enriched walkthrough exported from the Jericho simulator
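Once an archive has been downloaded and decompressed (see "How to use" below), the per-game files can be read with standard tools. The sketch below is illustrative rather than official tooling; it assumes the data/night layout shown above and simply loads each file (the .jsonl files contain one JSON object per line).

import json
from pathlib import Path

game_dir = Path("data/night")  # assumes the layout shown above

# Whole-file JSON artifacts
actions = json.loads((game_dir / "night.actions.json").read_text())
edges = json.loads((game_dir / "night.edges.json").read_text())
locations = json.loads((game_dir / "night.locations.json").read_text())

# JSONL artifacts: one JSON object per line
def read_jsonl(path):
    with open(path, encoding="utf-8") as fh:
        return [json.loads(line) for line in fh if line.strip()]

all_pairs = read_jsonl(game_dir / "night.all_pairs.jsonl")
all2all = read_jsonl(game_dir / "night.all2all.jsonl")

print(len(locations), "locations,", len(edges), "edges,", len(all_pairs), "location pairs")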
Variations
70-step vs all-step versions
In our paper, we benchmark using the first 70 steps of the walkthrough from each game. We also provide all-step versions of both the data and data-intermediate collections.
- 70-step (data-70steps.tar.zst): contains the first 70 steps of each walkthrough. If the complete walkthrough is shorter than 70 steps, then all steps are used.
- All-step (data.tar.zst): contains all steps of each walkthrough.
Word-only & Word+ID
- Word-only (data.tar.zst): nodes are annotated with additional descriptive text to distinguish different locations with similar names.
- Word + Object ID (data-objid.tar.zst): a variation of the word-only version, where nodes are labeled using minimally fixed names with the object ID from the Jericho simulator.
- Word + Random ID (data-randid.tar.zst): a variation of the Word + Object ID version, where the Jericho object ID is replaced with a randomly generated integer.

We primarily rely on the word-only version as the benchmark, but also provide the word+ID versions for diverse benchmark settings.
How to use
We use data.tar.zst as an example here.
1. Download from Hugging Face
By direct download
You can selectively download the variation of your choice.
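For example, a single archive can be fetched programmatically with the huggingface_hub Python library. This is a sketch, not official tooling; swap in data-70steps.tar.zst, data-objid.tar.zst, or data-randid.tar.zst for other variations.

from huggingface_hub import hf_hub_download

# Fetch one archive from the dataset repo; returns the local cached path.
archive_path = hf_hub_download(
    repo_id="mango-ttic/data",
    filename="data.tar.zst",
    repo_type="dataset",
)
print(archive_path)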
By git
Make sure you have git-lfs installed:
git lfs install
git clone https://huggingface.co/datasets/mango-ttic/data
# or, use hf-mirror if your connection to huggingface.co is slow
# git clone https://hf-mirror.com/datasets/mango-ttic/data
If you want to clone without large files (just their pointers):
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/mango-ttic/data
# or, use hf-mirror if your connection to huggingface.co is slow
# GIT_LFS_SKIP_SMUDGE=1 git clone https://hf-mirror.com/datasets/mango-ttic/data
2. Decompress
Because some JSON files are huge, we use tar.zst to package the data efficiently.
Silently decompress:
tar -I 'zstd -d' -xf data.tar.zst
Or, verbosely decompress:
zstd -d -c data.tar.zst | tar -xvf -
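If you prefer to decompress from Python, a rough equivalent using the third-party zstandard package (pip install zstandard) looks like this:

import tarfile
import zstandard

# Stream-decompress the archive and extract the tar members as they arrive.
with open("data.tar.zst", "rb") as compressed:
    reader = zstandard.ZstdDecompressor().stream_reader(compressed)
    with tarfile.open(fileobj=reader, mode="r|") as tar:
        tar.extractall()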