---
library_name: peft
datasets:
- squad_v2
model-index:
- name: dangkhoa99/falcon-7b-finetuned-QA-MRC-4-bit
results: []
language:
- en
tags:
- falcon-7b
- custom_code
- text-generation-inference
- endpoints-template
metrics:
- exact_match
- f1
pipeline_tag: text-generation
inference: false
---
# 🚀 falcon-7b-finetuned-QA-MRC-4-bit
Falcon-7b-finetuned-QA-MRC-4-bit is a model for Machine Reading Comprehension (MRC) with Question Answering (QA). It was built by fine-tuning Falcon-7B on the SQuAD2.0 dataset. This repo only includes the LoRA adapters from fine-tuning with 🤗's `peft` package.
## Model Summary
- Model Type: Causal decoder-only
- Language(s): English
- Base Model: Falcon-7B (License: Apache 2.0)
- Dataset: SQuAD2.0 (License: cc-by-sa-4.0)
- License(s): Apache 2.0 (inherited from the base model) and cc-by-sa-4.0 (inherited from the dataset)
## Model Details
The model was fine-tuned in 4-bit precision using 🤗 `peft` adapters together with `transformers` and `bitsandbytes`. Training relied on a method called Low-Rank Adaptation (LoRA), specifically the QLoRA variant; a sketch of this setup appears under "Training procedure" below. The run took approximately 5.08 hours on a workstation with a single NVIDIA A100-SXM GPU with 37 GB of available memory.
## Model Date
August 08, 2023
## Usage
### Prompt
The model was trained on prompts of the following form (reproduced as a Python format string after the block):
"""Answer the question based on the context below. If the question cannot be answered using the information provided answer with 'No answer'. Stop response if end.
>>TITLE<<: Flawless answer.
>>CONTEXT<<: {context}
>>QUESTION<<: {question}
>>ANSWER<<:
"""
### Inference
You will need at least 6 GB of GPU memory to run inference at a reasonable speed.
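The sketch below is a minimal example, not the author's published inference script. It assumes the base model is `tiiuae/falcon-7b`, attaches this repo's adapters with `peft`, mirrors the 4-bit quantization settings listed under "Training procedure", and reuses the `PROMPT_TEMPLATE` helper from the Prompt section.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_MODEL = "tiiuae/falcon-7b"  # assumed base checkpoint
ADAPTERS = "dangkhoa99/falcon-7b-finetuned-QA-MRC-4-bit"

# 4-bit quantization, mirroring the training-time bitsandbytes config below.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, ADAPTERS)  # attach the LoRA adapters
model.eval()

def answer(context: str, question: str) -> str:
    # PROMPT_TEMPLATE is defined in the Prompt section above.
    prompt = PROMPT_TEMPLATE.format(context=context, question=question)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(
            **inputs,
            max_new_tokens=50,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()
```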
Example 1:
```python
context = '''The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.'''
question = '''Which name is also used to describe the Amazon rainforest in English?'''

>>> 'Amazonia or the Amazon Jungle'
```
Example 2 (No answer):
```python
context = '''The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.'''
question = '''What is 2 + 2?'''

>>> 'No answer'
```
## Training procedure
The following `bitsandbytes` quantization config was used during training (a QLoRA sketch using this config follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
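As a hedged sketch, this config typically plugs into a QLoRA fine-tune as shown below. The LoRA hyperparameters (`r`, `lora_alpha`, `target_modules`, `lora_dropout`) are illustrative assumptions, not the values used for this run.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Same 4-bit quantization config as listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",  # assumed base checkpoint
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)  # prep quantized weights for training

# Illustrative LoRA hyperparameters -- not the actual values of this run.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```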
## Performance
Evaluated on the SQuAD 2.0 dev set with the official SQuAD 2.0 metrics (a scoring sketch follows the results):
```python
{
    'exact': 71.48993514697212,
    'f1': 76.65914166347146,
    'total': 11873,
    'HasAns_exact': 62.78677462887989,
    'HasAns_f1': 73.14001163468224,
    'HasAns_total': 5928,
    'NoAns_exact': 80.1682085786375,
    'NoAns_f1': 80.1682085786375,
    'NoAns_total': 5945,
    'best_exact': 71.48993514697212,
    'best_exact_thresh': 0.0,
    'best_f1': 76.65914166347147,
    'best_f1_thresh': 0.0,
}
```
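As a hedged sketch, comparable scores can be computed with the `squad_v2` metric from the 🤗 `evaluate` library; the example id and answer span below are illustrative placeholders, and a real run covers all 11,873 dev-set questions.

```python
import evaluate

squad_v2 = evaluate.load("squad_v2")

predictions = [{
    "id": "example-0",  # hypothetical id, must match the reference id
    "prediction_text": "Amazonia or the Amazon Jungle",
    "no_answer_probability": 0.0,
}]
references = [{
    "id": "example-0",
    "answers": {"text": ["Amazonia or the Amazon Jungle"], "answer_start": [201]},
}]

# Returns a dict with 'exact', 'f1', 'HasAns_*', 'NoAns_*', and 'best_*' keys.
print(squad_v2.compute(predictions=predictions, references=references))
```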
## Framework versions
- PEFT 0.5.0.dev0
- Transformers 4.31.0
- Datasets 2.14.4
- Tokenizers 0.13.3