---
language:
- en
license: llama2
library_name: peft
datasets:
- TuningAI/Cover_letter_v2
pipeline_tag: text-generation
base_model: meta-llama/Llama-2-7b-hf
---

## Model Name: **Llama2_7B_Cover_letter_generator**

## Description:

**Llama2_7B_Cover_letter_generator** is a custom language model fine-tuned to generate cover letters for a variety of job positions. It automates the creation of personalized cover letters tailored to specific job descriptions.

## Base Model:

This model is based on Meta's **meta-llama/Llama-2-7b-hf** architecture, a capable foundation for generating human-like text.

## Dataset:

This model was fine-tuned on a custom dataset of more than 200 unique examples. The dataset combines manual entries with contributions generated by GPT-3.5, GPT-4, and Falcon 180B.

## Fine-tuning Techniques:

Fine-tuning was performed using QLoRA (Quantized LoRA), an extension of LoRA that quantizes the frozen base model for greater parameter and memory efficiency. The base model was loaded with 4-bit NormalFloat (NF4) quantization (double quantization was left disabled, per the training config below), keeping memory use low with minimal impact on output quality.
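
For reference, below is a minimal sketch of what such a QLoRA setup looks like with `transformers` and `peft`. The quantization settings mirror the training config listed later in this card; the LoRA hyperparameters (`r`, `lora_alpha`, `lora_dropout`) are illustrative assumptions, since the card does not document the exact values used.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the frozen base model (matches the training config below).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)

# Train small low-rank adapters while the 4-bit base weights stay frozen.
lora_config = LoraConfig(
    r=16,              # assumed rank, not documented for this model
    lora_alpha=32,     # assumed scaling factor
    lora_dropout=0.05, # assumed dropout
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```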

## Use Cases:

* **Automating Cover Letter Creation:** Llama2_7B_Cover_letter_generator can rapidly generate cover letters for a wide range of job openings, saving time and effort for job seekers.

## Performance:

* Llama2_7B_Cover_letter_generator generates context-aware cover letters with high coherence and relevance to the supplied job description.
* It maintains a low perplexity score, indicating that its output aligns well with user input and the desired context.
* The quantization techniques keep the model efficient without significantly compromising quality.

## Limitations:

* While the model excels at generating cover letters, its output may occasionally need minor post-processing.
* It may not fully capture highly specific or niche job requirements, and some manual customization may be necessary for certain applications.
* Performance may vary with the complexity and uniqueness of the input prompt.
* Users should be mindful of potential biases in the generated content and review it to ensure inclusivity and fairness.

## Training procedure

The following `bitsandbytes` quantization config was used during training:

- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.4.0

## How to Get Started with the Model

```bash
huggingface-cli login
```
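
Note that **meta-llama/Llama-2-7b-hf** is a gated repository: you need a Hugging Face account that has accepted Meta's Llama 2 license before your token can download the base weights.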

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    logging,
    pipeline,
)
from peft import PeftModel

# Load the base model in 4-bit NF4, matching the training-time quantization config.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=False,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map={"": 0},
)
model.config.use_cache = False
model.config.pretraining_tp = 1

# Apply the fine-tuned LoRA adapter on top of the quantized base model.
model = PeftModel.from_pretrained(model, "TuningAI/Llama2_7B_Cover_letter_generator")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

logging.set_verbosity(logging.CRITICAL)  # silence generation warnings

# Build the pipeline once, outside the loop.
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=400)

Instruction = "Given a user's information about the target job, you will generate a Cover letter for this job based on this information."

while True:
    input_text = input(">>>")
    # Prompt template kept exactly as used at fine-tuning time.
    prompt = f"### Instruction\n{Instruction}.\n ###Input \n\n{input_text}. ### Output:"
    result = pipe(prompt)
    print(result[0]["generated_text"].replace(prompt, ""))
```
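
At the `>>>` prompt, describe the target job and your background in free text. A purely illustrative input (not taken from the model's dataset):

```
I am applying for a Senior Data Analyst position at Acme Corp. The role requires SQL,
Python, and dashboard-building experience; I have five years of experience at a fintech startup.
```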