Dataset Card for "guardian_authorship"
Dataset Summary
A dataset for cross-topic authorship attribution, provided by Stamatatos 2013.

1. The cross-topic scenarios are based on Table 4 in Stamatatos 2017 (e.g., cross_topic_1 corresponds to row 1: P S U&W).
2. The cross-genre scenarios are based on Table 5 in the same paper (e.g., cross_genre_1 corresponds to row 1: B P S&U&W).
3. The same-topic/genre scenario is created by grouping all the datasets together. For example, to use same_topic with a 60-40 split, combine the original splits as follows (see the runnable sketch below):
   `train_ds = load_dataset('guardian_authorship', name="cross_topic_<<#>>", split='train[:60%]+validation[:60%]+test[:60%]')`
   `test_ds = load_dataset('guardian_authorship', name="cross_topic_<<#>>", split='train[-40%:]+validation[-40%:]+test[-40%:]')`

IMPORTANT: `train+validation+test[:60%]` will generate the wrong splits because the data is imbalanced.

See https://huggingface.co/docs/datasets/splits.html for more detailed examples.
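A minimal runnable version of the same-topic recipe above, as a sketch: `cross_topic_1` stands in for the `cross_topic_<<#>>` placeholder, and depending on the installed `datasets` version, loading a script-based dataset may additionally require `trust_remote_code=True`.

```python
from datasets import load_dataset

# Same-topic scenario: pool the three original splits and re-split them 60/40.
# "cross_topic_1" is only an example; any cross_topic_<#> configuration works the same way.
# Depending on the datasets version, trust_remote_code=True may also be needed.
train_ds = load_dataset(
    "guardian_authorship",
    name="cross_topic_1",
    split="train[:60%]+validation[:60%]+test[:60%]",
)
test_ds = load_dataset(
    "guardian_authorship",
    name="cross_topic_1",
    split="train[-40%:]+validation[-40%:]+test[-40%:]",
)

print(len(train_ds), len(test_ds))
```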
Supported Tasks and Leaderboards
Languages
Dataset Structure
Data Instances
cross_genre_1
- Size of downloaded dataset files: 3.10 MB
- Size of the generated dataset: 2.74 MB
- Total amount of disk used: 5.84 MB
An example of 'train' looks as follows.
{
    "article": "File 1a\n",
    "author": 0,
    "topic": 4
}
cross_genre_2
- Size of downloaded dataset files: 3.10 MB
- Size of the generated dataset: 2.74 MB
- Total amount of disk used: 5.84 MB
An example of 'validation' looks as follows.
{
    "article": "File 1a\n",
    "author": 0,
    "topic": 1
}
cross_genre_3
- Size of downloaded dataset files: 3.10 MB
- Size of the generated dataset: 2.74 MB
- Total amount of disk used: 5.84 MB
An example of 'validation' looks as follows.
{
    "article": "File 1a\n",
    "author": 0,
    "topic": 2
}
cross_genre_4
- Size of downloaded dataset files: 3.10 MB
- Size of the generated dataset: 2.74 MB
- Total amount of disk used: 5.84 MB
An example of 'validation' looks as follows.
{
    "article": "File 1a\n",
    "author": 0,
    "topic": 3
}
cross_topic_1
- Size of downloaded dataset files: 3.10 MB
- Size of the generated dataset: 2.34 MB
- Total amount of disk used: 5.43 MB
An example of 'validation' looks as follows.
{
    "article": "File 1a\n",
    "author": 0,
    "topic": 1
}
Data Fields
The data fields are the same among all splits.
cross_genre_1
- `author`: a classification label, with possible values including `catherinebennett` (0), `georgemonbiot` (1), `hugoyoung` (2), `jonathanfreedland` (3), `martinkettle` (4).
- `topic`: a classification label, with possible values including `Politics` (0), `Society` (1), `UK` (2), `World` (3), `Books` (4).
- `article`: a `string` feature.
cross_genre_2
- `author`: a classification label, with possible values including `catherinebennett` (0), `georgemonbiot` (1), `hugoyoung` (2), `jonathanfreedland` (3), `martinkettle` (4).
- `topic`: a classification label, with possible values including `Politics` (0), `Society` (1), `UK` (2), `World` (3), `Books` (4).
- `article`: a `string` feature.
cross_genre_3
- `author`: a classification label, with possible values including `catherinebennett` (0), `georgemonbiot` (1), `hugoyoung` (2), `jonathanfreedland` (3), `martinkettle` (4).
- `topic`: a classification label, with possible values including `Politics` (0), `Society` (1), `UK` (2), `World` (3), `Books` (4).
- `article`: a `string` feature.
cross_genre_4
- `author`: a classification label, with possible values including `catherinebennett` (0), `georgemonbiot` (1), `hugoyoung` (2), `jonathanfreedland` (3), `martinkettle` (4).
- `topic`: a classification label, with possible values including `Politics` (0), `Society` (1), `UK` (2), `World` (3), `Books` (4).
- `article`: a `string` feature.
cross_topic_1
- `author`: a classification label, with possible values including `catherinebennett` (0), `georgemonbiot` (1), `hugoyoung` (2), `jonathanfreedland` (3), `martinkettle` (4).
- `topic`: a classification label, with possible values including `Politics` (0), `Society` (1), `UK` (2), `World` (3), `Books` (4).
- `article`: a `string` feature.
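Since `author` and `topic` are `ClassLabel` features, the integer values in each example can be mapped back to the names listed above. A small sketch, under the same loading assumptions as the example in the Dataset Summary:

```python
from datasets import load_dataset

ds = load_dataset("guardian_authorship", name="cross_genre_1", split="train")

# ClassLabel features expose the label names plus int2str/str2int converters.
example = ds[0]
print(ds.features["author"].names)                        # ['catherinebennett', 'georgemonbiot', ...]
print(ds.features["author"].int2str(example["author"]))   # e.g. 'catherinebennett'
print(ds.features["topic"].int2str(example["topic"]))     # e.g. 'Books'
```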
Data Splits
| name          | train | validation | test |
|---------------|------:|-----------:|-----:|
| cross_genre_1 |    63 |        112 |  269 |
| cross_genre_2 |    63 |         62 |  319 |
| cross_genre_3 |    63 |         90 |  291 |
| cross_genre_4 |    63 |        117 |  264 |
| cross_topic_1 |   112 |         62 |  207 |
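The split sizes in the table can be checked programmatically: loading a configuration without a `split` argument returns a `DatasetDict` keyed by split name (a sketch, same assumptions as above).

```python
from datasets import load_dataset

# Expected for cross_genre_1: train=63, validation=112, test=269.
dsd = load_dataset("guardian_authorship", name="cross_genre_1")
print({split: dsd[split].num_rows for split in dsd})
```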
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
@article{stamatatos2013robustness,
  author  = {Stamatatos, Efstathios},
  year    = {2013},
  month   = {01},
  pages   = {421--439},
  title   = {On the robustness of authorship attribution based on character n-gram features},
  volume  = {21},
  journal = {Journal of Law and Policy}
}

@inproceedings{stamatatos2017authorship,
  title     = {Authorship attribution using text distortion},
  author    = {Stamatatos, Efstathios},
  booktitle = {Proc. of the 15th Conf. of the European Chapter of the Association for Computational Linguistics},
  volume    = {1},
  pages     = {1138--1149},
  year      = {2017}
}
Contributions
Thanks to @thomwolf, @eltoto1219, @malikaltakrori for adding this dataset.