---
size_categories: 1K<n<10K
tags:
  - rlfh
  - argilla
  - human-feedback
---

# Dataset Card for oasst_response_quality

This dataset has been created with [Argilla](https://github.com/argilla-io/argilla).

As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library as explained in [Load with `datasets`](#load-with-datasets).

## Dataset Description

### Dataset Summary

This dataset contains:

- A dataset configuration file named `argilla.yaml`, conforming to the Argilla dataset format. This file is used to configure the dataset when loading it with the `FeedbackDataset.from_huggingface` method in Argilla.

- Dataset records in a format compatible with HuggingFace `datasets`. These records are loaded automatically by `FeedbackDataset.from_huggingface`, and can also be loaded independently with the `datasets` library via `load_dataset`.

- The annotation guidelines that were used for building and curating the dataset, if they were defined in Argilla.

### Load with Argilla

To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:

```python
import argilla as rg

ds = rg.FeedbackDataset.from_huggingface("frascuchon/oasst_response_quality")
```
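Once loaded, the configuration described in this card can be inspected directly. A minimal sketch, assuming the Argilla 1.x `FeedbackDataset` attributes:

```python
# Inspect the loaded FeedbackDataset (Argilla 1.x attributes).
print(ds.fields)      # the two text fields: prompt and response
print(ds.questions)   # the label_selection, multi_label_selection, rating, and text questions
print(ds.guidelines)  # the annotation guidelines string, if defined
```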

### Load with `datasets`

To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:

```python
from datasets import load_dataset

ds = load_dataset("frascuchon/oasst_response_quality")
```
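Since the dataset ships a single `train` split (see [Data Splits](#data-splits)), it can also be loaded directly as that split; a short usage sketch:

```python
from datasets import load_dataset

# Load only the train split, the single split this dataset provides.
ds = load_dataset("frascuchon/oasst_response_quality", split="train")

print(ds.features)      # column names and types, including the *-suggestion columns
print(ds[0]["prompt"])  # the prompt text of the first record
```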

### Supported Tasks and Leaderboards

This dataset can contain multiple fields, questions, and responses, so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure](#dataset-structure) section.

There are no leaderboards associated with this dataset.
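For example, the `relevant` question could back a binary prompt–response classification task. A minimal sketch, assuming the train split loaded above and assuming annotator responses follow the Argilla 1.x export shape (a list of dicts with a `value` key; the example record later in this card is unannotated, so its lists are empty):

```python
# Hypothetical: build (prompt, response, label) triples from annotated records;
# unannotated records (empty "relevant" lists) are skipped.
triples = [
    (rec["prompt"], rec["response"], rec["relevant"][0]["value"])
    for rec in ds  # ds: the train split
    if rec["relevant"]
]
```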

### Languages

[More Information Needed]

## Dataset Structure

### Data in Argilla

The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.

The **fields** are the dataset records themselves; at the moment, only text fields are supported. These are what annotators read when providing responses to the questions.

| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| prompt | Prompt | FieldTypes.text | True | True |
| response | Response | FieldTypes.text | True | True |

The **questions** are the questions that will be asked to the annotators. They can be of different types, such as `rating`, `text`, `label_selection`, `multi_label_selection`, or `ranking`.

| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| relevant | Is the response relevant for the given prompt? | QuestionTypes.label_selection | True | N/A | ['Yes', 'No'] |
| content_class | Does the response include any of the following? | QuestionTypes.multi_label_selection | False | N/A | ['hate', 'inappropriate', 'not_english', 'pii', 'sexual', 'untruthful', 'violent'] |
| rating | Rate the quality of the response: | QuestionTypes.rating | True | Rate the quality of the response based on its truthfulness and helpfulness. 1 is very bad and 10 is very good. | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| corrected-text | Provide a correction to the response: | QuestionTypes.text | False | If the rating provided is below 4, please write a new version of the response. | N/A |

The **suggestions** are human- or machine-generated recommendations for each question, intended to assist the annotator during the annotation process. Each suggestion is linked to an existing question and is stored in columns named by appending "-suggestion" and "-suggestion-metadata" to the question name, which contain the suggested value(s) and the suggestion's metadata, respectively. The possible values are therefore the same as in the table above, just under the suffixed column names.
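For instance, in the flattened `datasets` export shown in [Data Instances](#data-instances), the `relevant` question carries the companion columns `relevant-suggestion` and `relevant-suggestion-metadata`; a minimal sketch of reading them:

```python
# Read a suggestion and its provenance for the "relevant" question
# (both are None/null in the example record, which carries no suggestions).
record = ds[0]  # ds: the train split loaded with the datasets library
suggested_label = record["relevant-suggestion"]       # e.g. "Yes", or None
provenance = record["relevant-suggestion-metadata"]   # {"agent": ..., "score": ..., "type": ...}
```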

The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to give annotators extra context, or to record details about the record itself, such as the author, the date, or a link to the original source. The metadata is always optional, and can be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.

| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |

No metadata properties are defined for this dataset, so the table above is empty.

The **guidelines** are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.

### Data Instances

An example of a dataset instance in Argilla looks as follows:

```json
{
    "external_id": null,
    "fields": {
        "prompt": "Can you explain contrastive learning in machine learning in simple terms for someone new to the field of ML?",
        "response": "Sure! Let\u0027s say you want to build a model which can distinguish between images of cats and dogs. You gather your dataset, consisting of many cat and dog pictures. Then you put them through a neural net of your choice, which produces some representation for each image, a sequence of numbers like [0.123, 0.045, 0.334, ...]. The problem is, if your model is unfamiliar with cat and dog images, these representations will be quite random. At one time a cat and a dog picture could have very similar representations (their numbers would be close to each other), while at others two cat images may be represented far apart. In simple terms, the model wouldn\u0027t be able to tell cats and dogs apart. This is where contrastive learning comes in.\n\nThe point of contrastive learning is to take pairs of samples (in this case images of cats and dogs), then train the model to \"pull\" representations of similar pairs (cat-cat or dog-dog) closer to each other and \"push\" representations of different pairs (cat-dog) apart. After doing this for a sufficient number of steps, your model will be able to produce unique, reliable representations for cats and dogs, in essence tell them apart.\n\nThis method is not limited to images, you can typically use it with any dataset that has similar and dissimilar data points."
    },
    "metadata": {},
    "responses": [],
    "suggestions": [],
    "vectors": {}
}
```

While the same record in HuggingFace `datasets` looks as follows:

```json
{
    "content_class": [],
    "content_class-suggestion": null,
    "content_class-suggestion-metadata": {
        "agent": null,
        "score": null,
        "type": null
    },
    "corrected-text": [],
    "corrected-text-suggestion": null,
    "corrected-text-suggestion-metadata": {
        "agent": null,
        "score": null,
        "type": null
    },
    "external_id": null,
    "metadata": "{}",
    "prompt": "Can you explain contrastive learning in machine learning in simple terms for someone new to the field of ML?",
    "rating": [],
    "rating-suggestion": null,
    "rating-suggestion-metadata": {
        "agent": null,
        "score": null,
        "type": null
    },
    "relevant": [],
    "relevant-suggestion": null,
    "relevant-suggestion-metadata": {
        "agent": null,
        "score": null,
        "type": null
    },
    "response": "Sure! Let\u0027s say you want to build a model which can distinguish between images of cats and dogs. You gather your dataset, consisting of many cat and dog pictures. Then you put them through a neural net of your choice, which produces some representation for each image, a sequence of numbers like [0.123, 0.045, 0.334, ...]. The problem is, if your model is unfamiliar with cat and dog images, these representations will be quite random. At one time a cat and a dog picture could have very similar representations (their numbers would be close to each other), while at others two cat images may be represented far apart. In simple terms, the model wouldn\u0027t be able to tell cats and dogs apart. This is where contrastive learning comes in.\n\nThe point of contrastive learning is to take pairs of samples (in this case images of cats and dogs), then train the model to \"pull\" representations of similar pairs (cat-cat or dog-dog) closer to each other and \"push\" representations of different pairs (cat-dog) apart. After doing this for a sufficient number of steps, your model will be able to produce unique, reliable representations for cats and dogs, in essence tell them apart.\n\nThis method is not limited to images, you can typically use it with any dataset that has similar and dissimilar data points."
}
```
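Note that the question columns (`relevant`, `content_class`, `rating`, `corrected-text`) hold lists of annotator responses and are empty above because the record is unannotated. A minimal sketch of reading a value with the suggestion as fallback, assuming the Argilla 1.x export shape in which each response is a dict with a `value` key:

```python
def question_value(record, question):
    """Return the first annotator's value for a question, falling back to its suggestion."""
    responses = record[question]
    if responses:  # empty when no annotator has answered, as in the record above
        first = responses[0]
        return first["value"] if isinstance(first, dict) else first
    return record[f"{question}-suggestion"]  # may be None, as in the record above

# question_value(ds[0], "rating") -> None for this unannotated, suggestion-free record
```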

### Data Fields

Among the dataset fields, we differentiate between the following:

- **Fields:** These are the dataset records themselves; at the moment, only text fields are supported. These are what annotators read when providing responses to the questions.

  - `prompt` is of type `FieldTypes.text`.
  - `response` is of type `FieldTypes.text`.

- **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.

  - `relevant` is of type `QuestionTypes.label_selection` with the following allowed values ['Yes', 'No'].
  - (optional) `content_class` is of type `QuestionTypes.multi_label_selection` with the following allowed values ['hate', 'inappropriate', 'not_english', 'pii', 'sexual', 'untruthful', 'violent'].
  - `rating` is of type `QuestionTypes.rating` with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], and description "Rate the quality of the response based on its truthfulness and helpfulness. 1 is very bad and 10 is very good.".
  - (optional) `corrected-text` is of type `QuestionTypes.text`, and description "If the rating provided is below 4, please write a new version of the response.".

- **Suggestions:** As of Argilla 1.13.0, suggestions have been included to assist the annotators during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.

  - (optional) `relevant-suggestion` is of type `QuestionTypes.label_selection` with the following allowed values ['Yes', 'No'].
  - (optional) `content_class-suggestion` is of type `QuestionTypes.multi_label_selection` with the following allowed values ['hate', 'inappropriate', 'not_english', 'pii', 'sexual', 'untruthful', 'violent'].
  - (optional) `rating-suggestion` is of type `QuestionTypes.rating` with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
  - (optional) `corrected-text-suggestion` is of type `QuestionTypes.text`.

Additionally, there are two more optional fields:

- **metadata:** An optional field that can be used to provide additional information about the dataset record. This can be useful to give annotators extra context, or to record details about the record itself, such as the author, the date, or a link to the original source. It can be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
- **external_id:** An optional field that can be used to provide an external ID for the dataset record, which can be useful for linking the record to an external resource such as a database or a file.
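Note that in the `datasets` export shown in [Data Instances](#data-instances), `metadata` is serialized as a JSON string ("{}"); a minimal sketch of recovering the dictionary:

```python
import json

record = ds[0]  # ds: the train split loaded with the datasets library
metadata = json.loads(record["metadata"] or "{}")  # "{}" -> {} for this dataset
external_id = record["external_id"]                # None when no external ID was set
```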

### Data Splits

The dataset contains a single split, which is train.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation guidelines

Answer the questions to assess the quality of the response given by the chat assistant.

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]