---
license: cc-by-nc-4.0
language:
  - en
size_categories:
  - 10K<n<100K
configs:
  - config_name: default
    data_files: discord_logs.json
  - config_name: unsquashed
    data_files: discord_logs_unsquashed.json
  - config_name: two_users
    data_files: discord_logs_two_users.json
  - config_name: split_threads
    data_files: discord_logs_split_threads.json
  - config_name: anonymized
    data_files: discord_logs_anonymized.json
---

This dataset comprises roleplay chat conversations scraped from several Discord RP fandom servers. The conversations have been split by day, on the assumption that most long-form roleplays are started (or continued) and completed within a single day.
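The per-day splitting can be sketched as follows. This is an illustration rather than the actual scraping code; the message shape (`timestamp`, `author`, `message`) is assumed from the property list further down.

```python
from collections import defaultdict
from datetime import datetime

def split_by_day(messages):
    """Group a channel's messages into per-day conversations."""
    days = defaultdict(list)
    for msg in messages:
        # Bucket by the YYYY-MM-DD part of the timestamp.
        day = datetime.fromisoformat(msg["timestamp"]).date().isoformat()
        days[day].append(msg)
    return dict(days)

# Hypothetical messages for illustration.
msgs = [
    {"timestamp": "2023-05-01", "author": "a", "message": "hi"},
    {"timestamp": "2023-05-01", "author": "b", "message": "hello"},
    {"timestamp": "2023-05-02", "author": "a", "message": "a new day, a new scene"},
]
print(sorted(split_by_day(msgs)))  # ['2023-05-01', '2023-05-02']
```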

The original dataset consisted of ~90K samples. Light filtering stripped that down to ~18K samples, stricter filtering to ~8K, and the strictest filtering to ~2K.

Some effort was made to remove OOC (out-of-character) commentary, links, and other miscellaneous fluff, but more work still needs to be done. This isn't a "completed" dataset so much as a test to see if the data gathered is conducive to training LLMs for roleplay purposes. If determined to be useful, I will continue to scrape more data.

This dataset includes several files:

  • discord_logs_unsquashed.json - The original dataset without squashing consecutive messages from the same author. All subsequent files are squashed.
  • discord_logs.json - The original dataset and default option.
  • discord_logs_two_users.json - The original dataset limited to conversations with only two users. I recommend using this file.
  • discord_logs_split_threads.json - The original dataset with threads split by timestamp like channels.
  • discord_logs_anonymized.json - The original dataset with usernames replaced with randomized substitutes.
  • 125_tokens_6_messages.json (Strictest) - Original dataset filtered for an average and median token length of 125 tokens and a minimum conversation length of 6 messages.
  • 80_tokens_6_messages.json (Stricter) - Original dataset filtered for an average and median token length of 80 tokens and a minimum conversation length of 6 messages. This set is a superset of the strictest set, so use one or the other, but not both.
  • 80_tokens_3_messages.json (Light) - Original dataset filtered for an average and median token length of 80 tokens and a minimum conversation length of 3 messages. This set is a superset of the stricter set, so use one or the other, but not both.
  • opencai_rp.json - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages, then processed. Contains descriptions of characters, summary, scene, and genre tags provided by gpt-3.5-turbo-16k.
  • opencai_rp_metharme.json - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages, then processed, filtered to 1229 samples, and converted to metharme format.
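The squashing and token-length filtering described above can be sketched like this. It is a minimal reconstruction, not the actual preprocessing code: the dataset's real counts come from tiktoken, whereas this sketch splits on whitespace to stay dependency-free. The `author`/`message` keys match the conversation entries documented below.

```python
from statistics import mean, median

def squash(conversation):
    """Merge consecutive messages from the same author into one utterance."""
    squashed = []
    for msg in conversation:
        if squashed and squashed[-1]["author"] == msg["author"]:
            squashed[-1]["message"] += "\n" + msg["message"]
        else:
            squashed.append({"author": msg["author"], "message": msg["message"]})
    return squashed

def passes_filter(conversation, min_tokens=80, min_messages=3):
    """Apply one filter tier (defaults match the Light tier: 80 tokens / 3 messages).

    Whitespace word counts stand in for tiktoken token counts here.
    """
    lengths = [len(msg["message"].split()) for msg in conversation]
    return (
        len(lengths) >= min_messages
        and mean(lengths) >= min_tokens
        and median(lengths) >= min_tokens
    )
```

Passing `min_tokens=125, min_messages=6` would correspond to the Strictest tier, and `min_tokens=80, min_messages=6` to the Stricter tier.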

Explanation of Properties:

  • timestamp: Date of the interaction in YYYY-MM-DD format
  • type: Whether the interaction originated from a channel (GuildTextChat) or thread (GuildPublicThread). Threads were parsed differently than channels and use a static timestamp of 1776-07-04 to differentiate them.
  • token_length: The total token length of all messages in the conversation, calculated using tiktoken.
  • average_token_length: The average token length of all messages in the conversation.
  • median_token_length: The median token length of all messages in the conversation.
  • conversations: The conversation between the users in the chat, represented as a list of dictionaries. Each dictionary represents a single utterance and contains two key-value pairs: message, the utterance itself, and author, the speaker's Discord username.
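A hypothetical record illustrating the properties above, with the three token-length fields recomputed (whitespace word counts again stand in for the tiktoken counts used by the dataset):

```python
from statistics import mean, median

# Hypothetical sample mirroring the dataset's schema; the text is invented.
record = {
    "timestamp": "2023-05-01",
    "type": "GuildTextChat",
    "conversations": [
        {"author": "user_one", "message": "The rain hadn't stopped for hours."},
        {"author": "user_two", "message": "She pulled her hood tighter and ran."},
    ],
}

# Recompute the derived token-length properties.
lengths = [len(turn["message"].split()) for turn in record["conversations"]]
record["token_length"] = sum(lengths)            # 13
record["average_token_length"] = mean(lengths)   # 6.5
record["median_token_length"] = median(lengths)  # 6.5
```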