---
datasets:
  - OpenAssistant/oasst1
pipeline_tag: text-generation
license: apache-2.0
---
# 🚀 Falcon-7b-chat-oasst1
Falcon-7b-chat-oasst1 is a chatbot-like model for dialogue generation. It was built by fine-tuning Falcon-7B on the OpenAssistant/oasst1 dataset. This repo only includes the LoRA adapters from fine-tuning with 🤗's peft package.
## Model Summary
- Model Type: Causal decoder-only
- Language(s): English
- Base Model: Falcon-7B (License: Apache 2.0)
- Dataset: OpenAssistant/oasst1 (License: Apache 2.0)
- License(s): Apache 2.0, inherited from the base model and dataset above
## Model Details
The model was fine-tuned in 8-bit precision using 🤗 peft adapters, transformers, and bitsandbytes. Training relied on a parameter-efficient method called Low-Rank Adaptation (LoRA), specifically the QLoRA variant.
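Because this repo contains only the LoRA adapters, they need to be attached to the Falcon-7B base model at load time. The snippet below is a minimal sketch of that setup (not an official loading script): it assumes the 🤗 transformers, peft, bitsandbytes, and accelerate packages are installed, and the adapter repo id is a placeholder to replace with this repo's actual path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "tiiuae/falcon-7b"                   # Falcon-7B base model
adapter_id = "your-username/falcon-7b-chat-oasst1"   # placeholder: this repo's LoRA adapters

tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Load the frozen base model in 8-bit precision via bitsandbytes.
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
    trust_remote_code=True,
)

# Attach the fine-tuned LoRA adapters on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```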
## Model Date
July 3, 2023
## Quick Start
To prompt the chat model, use the following format:
```
<human>: [Instruction]
<bot>:
```
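For illustration, a rough generation sketch using this prompt format (assuming `model` and `tokenizer` are loaded as in Model Details, and using a made-up instruction) might look like:

```python
import torch

# Wrap the instruction in the <human>/<bot> prompt format.
instruction = "Write a short note thanking a colleague for their help."  # illustrative only
prompt = f"<human>: {instruction}\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,
    )

# Decode only the newly generated tokens (the bot's reply).
reply = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(reply)
```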
### Example Dialogue 1
Prompter:
"""<human>: A law suit has been filed against me by a Fodd Giant, Nestle, on reviewing their product 'Maggie' on YouTube.
You need to act as my lawyer. Write a letter to District Judge defending my case.
<bot>:"""
Falcon-7b-chat-oasst1:
Dear District Judge,
I am writing to you in response to the law suit filed by the Food Giant, Nestle, against me on reviewing their product 'Maggie' on YouTube.
I would like to state that I have not violated any laws or regulations by reviewing the product on my YouTube channel. I have followed all the guidelines and regulations set by the Food and Drug Administration (FDA) and the Federal Trade Commission (FTC) when reviewing products.
I believe that my review of the product was fair and accurate, and I have not made any false or misleading statements about the product.