---
datasets:
- OpenAssistant/oasst1
pipeline_tag: text-generation
license: apache-2.0
---
# Falcon-7b-chat-oasst1
Falcon-7b-chat-oasst1 is a chatbot-like model for dialogue generation. It was built by fine-tuning Falcon-7B on the OpenAssistant/oasst1 dataset. This repo only includes the LoRA adapters from fine-tuning with Hugging Face's 🤗 `peft` package.
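A minimal sketch of loading the base model and attaching these adapters with `peft` is shown below; the adapter repo id (`your-username/falcon-7b-chat-oasst1`) is a placeholder and should be replaced with this repo's actual id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "tiiuae/falcon-7b"
adapter_id = "your-username/falcon-7b-chat-oasst1"  # placeholder: use this repo's id

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
# Attach the fine-tuned LoRA adapters on top of the frozen base weights.
model = PeftModel.from_pretrained(base_model, adapter_id)
```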
## Model Summary
- Model Type: Causal decoder-only
- Language(s): English
- Base Model: Falcon-7B (License: Apache 2.0)
- Dataset: OpenAssistant/oasst1 (License: Apache 2.0)
- License(s): Apache 2.0 inherited from "Base Model" and "Dataset"
## Model Details
The model was fine-tuned in 8-bit precision using 🤗 `peft` adapters, `transformers`, and `bitsandbytes`. Training relied on a method called "Low Rank Adapters" (LoRA), specifically the QLoRA variant.
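For illustration, the snippet below sketches what such an 8-bit + LoRA training setup looks like. The hyperparameters (rank, alpha, dropout) are assumptions for the example, not the values used to train this checkpoint.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 8-bit precision via bitsandbytes.
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# Wrap the quantized model with trainable low-rank adapters (assumed settings).
lora_config = LoraConfig(
    r=16,                                # assumed rank
    lora_alpha=32,                       # assumed scaling factor
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```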
## Model Date
July 3, 2023
## Quick Start
To prompt the chat model, use the following format:
```
<human>: [Instruction]
<bot>:
```
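For example, assuming `model` and `tokenizer` were loaded as in the snippet above, generation with this prompt format might look like the following (sampling parameters are illustrative):

```python
import torch

prompt = "<human>: Write a short poem about open-source AI.\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=200,
        do_sample=True,
        temperature=0.7,
        eos_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```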