---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 22394390
    num_examples: 67349
  - name: validation
    num_bytes: 324252
    num_examples: 872
  download_size: 4390572
  dataset_size: 22718642
task_categories:
- text-classification
language:
- en
---
# Dataset Card for "llama2-sst2-finetuning"

## Dataset Description

The llama2-sst2-finetuning dataset is designed for supervised fine-tuning of Llama 2 on the GLUE SST-2 sentiment classification task.
It provides two splits: training and validation.
To make the data directly usable for fine-tuning, each example is converted into the Llama 2 supervised fine-tuning prompt template, which follows this format:
```
<s>[INST] <<SYS>>
{System prompt}
<</SYS>>
{User prompt} [/INST] {Label} </s>
```
This dataset has been validated through supervised fine-tuning of the meta-llama/Llama-2-7b-hf model.

Note: for simplicity, only a single new column ('text') has been retained.
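As a minimal sketch (not the card authors' actual preprocessing script), the 'text' column could be built from raw GLUE SST-2 rows roughly as follows; the label wording ("negative"/"positive") and the system prompt text here are assumptions for illustration:

```python
# Hypothetical mapping from GLUE SST-2 integer labels to label strings.
LABEL_NAMES = {0: "negative", 1: "positive"}

# Assumed system prompt wording; the card does not specify the exact text.
SYSTEM_PROMPT = "Classify the sentiment of the sentence as positive or negative."


def to_text(row: dict) -> dict:
    """Map a raw SST-2 row ({'sentence': str, 'label': int}) to a single
    'text' field in the Llama 2 instruction format shown above."""
    prompt = (
        f"<s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n"
        f"{row['sentence']} [/INST] {LABEL_NAMES[row['label']]} </s>"
    )
    return {"text": prompt}


row = {"sentence": "a charming and often affecting journey.", "label": 1}
print(to_text(row)["text"])
```

With the `datasets` library, a function like this could be applied across a split via `dataset.map(to_text)`, dropping the original columns afterwards.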
## Other Useful Links

- [Get Llama 2 Prompt Format Right](https://www.reddit.com/r/LocalLLaMA/comments/155po2p/get_llama_2_prompt_format_right/)
- [Fine-Tune Your Own Llama 2 Model in a Colab Notebook](https://towardsdatascience.com/fine-tune-your-own-llama-2-model-in-a-colab-notebook-df9823a04a32)
- [Instruction fine-tuning Llama 2 with PEFT's QLoRa method](https://medium.com/@ud.chandra/instruction-fine-tuning-llama-2-with-pefts-qlora-method-d6a801ebb19)
- [GLUE SST2 Dataset](https://www.tensorflow.org/datasets/catalog/glue#gluesst2)