# Falcon 7B Tatts Merged Model

## Model Description
This model is a version of Falcon 7B with LoRA adapter weights merged into the base model, intended for causal language modeling and text generation tasks.
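As a rough sketch of how such a merged checkpoint can be loaded for generation with the `transformers` library (the repository id below is a placeholder, not a confirmed path):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id; substitute the actual model path.
model_id = "your-username/falcon-7b-tatts-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # Falcon checkpoints have shipped custom modeling code
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```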
## Training procedure

The following `bitsandbytes` quantization config was used during training (see the equivalent `BitsAndBytesConfig` sketch after the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
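A minimal sketch of the same settings expressed as a `transformers` `BitsAndBytesConfig`, assuming the values above map one-to-one onto the constructor arguments:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above: 4-bit NF4 quantization
# with fp16 compute and no double quantization.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```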
### Framework versions
- PEFT 0.4.0
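For reference, a hedged sketch of how a LoRA adapter is typically folded into the base Falcon 7B weights with PEFT; the adapter path and output directory are placeholders, not the actual training artifacts:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the full-precision base model (merging operates on unquantized weights).
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    torch_dtype=torch.float16,
    trust_remote_code=True,
)

# Attach the trained LoRA adapter (placeholder path) and fold its
# low-rank deltas into the base weights.
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
merged = model.merge_and_unload()

merged.save_pretrained("falcon-7b-tatts-merged")  # placeholder output directory
```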