---
dataset_info:
  features:
    - name: sentence
      dtype: string
    - name: unfairness_level
      dtype: string
    - name: sentence_length
      dtype: int64
    - name: __index_level_0__
      dtype: int64
  splits:
    - name: train
      num_bytes: 1348431
      num_examples: 7946
    - name: validation
      num_bytes: 168316
      num_examples: 1050
    - name: test
      num_bytes: 172556
      num_examples: 1045
  download_size: 794924
  dataset_size: 1689303
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

# TOS_DatasetV3

## Dataset Description

TOS_DatasetV3 is a dataset designed for analyzing the unfairness of terms of service (ToS) clauses. It includes sentences from various terms of service agreements categorized into three unfairness levels: clearly_fair, potentially_unfair, and clearly_unfair. This dataset aims to aid in the development of models that can assess the fairness of legal documents.

## Dataset Structure

The dataset consists of the following columns:

- `sentence`: A string containing a sentence from a terms-of-service document.
- `unfairness_level`: A string indicating the unfairness classification of the sentence. The possible values are:
  - `clearly_fair`
  - `potentially_unfair`
  - `clearly_unfair`
- `sentence_length`: An integer giving the length of the sentence.
- `__index_level_0__`: An integer index column retained from the original DataFrame.
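For classification, the three string labels are typically mapped to integer ids. A minimal sketch (the ordering below is an assumption; any consistent mapping works):

```python
# The three unfairness levels from the dataset, mapped to integer ids.
# This ordering is an assumption, not part of the dataset itself.
LABELS = ["clearly_fair", "potentially_unfair", "clearly_unfair"]

label2id = {label: i for i, label in enumerate(LABELS)}
id2label = {i: label for label, i in label2id.items()}

print(label2id)
# {'clearly_fair': 0, 'potentially_unfair': 1, 'clearly_unfair': 2}
```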

### Features

- `sentence`: string
- `unfairness_level`: string
- `sentence_length`: int64
- `__index_level_0__`: int64

### Splits

The dataset is divided into three splits:

- **Train**: used for training models (7,946 examples).
- **Validation**: used for validating model performance during training (1,050 examples).
- **Test**: used for evaluating model performance after training (1,045 examples).
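The split counts above correspond to roughly an 80/10/10 partition. A quick check from the stated sizes:

```python
# Split sizes as stated in the dataset metadata.
splits = {"train": 7946, "validation": 1050, "test": 1045}

total = sum(splits.values())
print(total)  # 10041

for name, n in splits.items():
    print(f"{name}: {n} ({n / total:.1%})")
```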

## Example

Here's a sample from the dataset:

| sentence | unfairness_level |
| --- | --- |
| "You agree to the terms." | clearly_fair |
| "We reserve the right to change." | potentially_unfair |
| "No refunds will be issued." | clearly_unfair |

## Usage

This dataset can be used for various natural language processing tasks, particularly in the context of legal text analysis, fairness assessment, and model training for detecting unfair terms in contracts.
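As a minimal sketch of one such task, the rows from the example table above can be filtered down to the clauses flagged as potentially or clearly unfair (this uses only the sample rows shown in this card, not the full dataset):

```python
# Sample rows taken from the Example section of this card.
rows = [
    {"sentence": "You agree to the terms.", "unfairness_level": "clearly_fair"},
    {"sentence": "We reserve the right to change.", "unfairness_level": "potentially_unfair"},
    {"sentence": "No refunds will be issued.", "unfairness_level": "clearly_unfair"},
]

# Keep every clause that is not clearly fair.
unfair = [r["sentence"] for r in rows if r["unfairness_level"] != "clearly_fair"]
print(unfair)
# ['We reserve the right to change.', 'No refunds will be issued.']
```

The same filter applies unchanged to a loaded split, since each example is a dict with the same keys.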

## How to Load the Dataset

You can load the dataset using the `datasets` library from Hugging Face:

```python
from datasets import load_dataset

dataset = load_dataset("CodeHima/TOS_DatasetV3")
```

## License

This dataset is licensed under the MIT License. Please see the LICENSE file for more details.

## Citation

If you use this dataset in your research, please cite it as follows:

```bibtex
@dataset{TOS_DatasetV3,
  author = {Himanshu Mohanty},
  title = {TOS_DatasetV3},
  year = {2024},
  url = {https://huggingface.co/datasets/CodeHima/TOS_DatasetV3}
}
```