---
language:
- nl
license: eupl-1.1
size_categories:
- 10K<n<100K
tags:
- documents
- fine-tuning
dataset_info:
  features:
  - name: prompt_id
    dtype: int64
  - name: message
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: train
    num_bytes: 9011452
    num_examples: 9900
  - name: test
    num_bytes: 998068
    num_examples: 1100
  - name: val
    num_bytes: 1000675
    num_examples: 1100
  - name: discard
    num_bytes: 7897005
    num_examples: 8718
  download_size: 5846654
  dataset_size: 18907200
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: val
    path: data/val-*
  - split: discard
    path: data/discard-*
---
This dataset is a modified version of the AmsterdamDocClassificationDataset. The original dataset consists of Dutch Raadsinformatie documents from the Municipality of Amsterdam, published in accordance with the Open Government Act (Woo). In this modified version, each document is truncated to its first 200 tokens. The dataset is used to fine-tune large language models (LLMs) for the Assessing LLMs for Document Classification project: each document is formatted into a zero-shot prompt and turned into a conversation in which the model's ideal response is the predicted class, formatted as JSON.
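To make the record structure concrete, the snippet below is a minimal sketch of loading the dataset with the `datasets` library and reading back the JSON-formatted class from the final conversation turn. The repository ID is a placeholder, and the exact role names and JSON keys are assumptions rather than documented facts.

```python
# A minimal sketch, assuming the Hugging Face `datasets` library; the repo ID
# below is a placeholder (the actual Hub ID is not stated in this card).
import json
from datasets import load_dataset

train = load_dataset("ORG/DATASET_NAME", split="train")  # replace with the real Hub ID

example = train[0]
print(example["prompt_id"])            # integer prompt identifier
for turn in example["message"]:        # list of {"content", "role"} turns
    print(turn["role"], "->", turn["content"][:80])

# The last turn is expected to hold the class prediction as JSON
# (assumption: the exact keys are not documented in this card).
prediction = json.loads(example["message"][-1]["content"])
print(prediction)
```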
Specifics:
- Truncation: the first 200 tokens of each document are kept. Documents are tokenized with the Llama tokenizer (see the first sketch after this list).
- Data split (see the second sketch after this list):
  - test set: the first 100 documents of each class (1100 documents in total)
  - train set: the remaining documents, with a maximum of 1500 documents per class (11000 documents); 90% of this set is used for fine-tuning the model (9900 documents)
  - val set: the remaining 10% of the train set, used for evaluating the loss during training (1100 documents)
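The truncation step referenced above can be sketched as follows; the Llama checkpoint is an assumption, not necessarily the tokenizer used to build this dataset.

```python
# A minimal sketch of the 200-token truncation, assuming the `transformers`
# AutoTokenizer API; the checkpoint name is an assumption.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

def truncate_to_200_tokens(text: str) -> str:
    # Encode without special tokens, keep the first 200 token IDs, decode back to text.
    token_ids = tokenizer(text, add_special_tokens=False)["input_ids"][:200]
    return tokenizer.decode(token_ids)
```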
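The data split can be reproduced roughly as in the sketch below; the `label` field name, the random seed, and the reading of the discard split as the per-class overflow are assumptions.

```python
# A rough sketch of the split logic, assuming each document is a dict with a
# "label" field; the seed and the "discard" interpretation are assumptions.
import random
from collections import defaultdict

def split_documents(docs):
    by_class = defaultdict(list)
    for doc in docs:
        by_class[doc["label"]].append(doc)

    test, pool, discard = [], [], []
    for items in by_class.values():
        test.extend(items[:100])        # first 100 docs per class -> test set
        pool.extend(items[100:1600])    # at most 1500 further docs per class
        discard.extend(items[1600:])    # overflow (one plausible reading of "discard")

    random.seed(42)                     # assumed seed, for a reproducible 90/10 split
    random.shuffle(pool)
    cut = int(0.9 * len(pool))
    return pool[:cut], pool[cut:], test, discard  # train, val, test, discard
```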
Data sources:
This dataset is part of [insert thesis info], created in collaboration with Amsterdam Intelligence for the City of Amsterdam.