ayoubkirouane committed
Commit 156d629
1 Parent(s): 453da8a

Upload model

Files changed (3):
1. README.md +12 -76
2. adapter_config.json +21 -0
3. adapter_model.bin +3 -0
README.md CHANGED
@@ -1,50 +1,20 @@
  ---
  library_name: peft
- license: llama2
- language:
- - en
- pipeline_tag: conversational
- tags:
- - legal
- datasets:
- - TuningAI/Startup_V1
  ---
-
- ## Model Name: **Llama2_13B_startup_Assistant**
-
- ## Description:
- Llama2_13B_startup_Assistant is a specialized language model fine-tuned from Meta's Llama2_13B.
-
- It is tailored to questions about Algerian tax law and Algerian startups, offering insight and guidance in both domains.
-
- ## Training Data:
- The model was fine-tuned on a custom, manually curated dataset of more than 200 unique examples.
- The dataset combines manual entries with contributions generated by GPT-3.5, GPT-4, and Falcon-180B.
-
- ## Fine-tuning Techniques:
- Fine-tuning was performed with QLoRA (Quantized LoRA), an extension of LoRA that quantizes the frozen base model for greater parameter efficiency.
- The base model was loaded with 4-bit NormalFloat (NF4) quantization (the recorded config below shows double quantization disabled).
-
- ## Use Cases:
-
- + Guidance and information on Algerian tax law.
- + Insights and advice on matters concerning Algerian startups.
- + Answering questions and supporting discussion on specific topics within these domains.
-
- ## Performance:
-
- Llama2_13B_startup_Assistant is designed to address queries about Algerian tax law and startups efficiently,
- making it a useful resource for individuals and businesses navigating these areas.
-
- ## Limitations:
-
- * Although specialized, the model does not cover every nuance of Algerian tax law or the startup ecosystem.
- * Accuracy varies with the complexity and specificity of the question.
- * It does not provide legal advice; users should seek professional consultation for critical legal matters.
-
  ## Training procedure


+ The following `bitsandbytes` quantization config was used during training:
+ - load_in_8bit: False
+ - load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: nf4
+ - bnb_4bit_use_double_quant: False
+ - bnb_4bit_compute_dtype: float16
+
  The following `bitsandbytes` quantization config was used during training:
  - load_in_8bit: False
  - load_in_4bit: True
@@ -59,38 +29,4 @@ The following `bitsandbytes` quantization config was used during training:
  
  - PEFT 0.4.0
  
- ### How to Get Started with the Model
- ```
- ! huggingface-cli login
- ```
-
- ```python
- import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
- from peft import PeftModel
-
- # Load the base model with the same 4-bit NF4 quantization used during training.
- bnb_config = BitsAndBytesConfig(
-     load_in_4bit=True,
-     bnb_4bit_quant_type="nf4",
-     bnb_4bit_compute_dtype=torch.float16,
-     bnb_4bit_use_double_quant=False,
- )
- model = AutoModelForCausalLM.from_pretrained(
-     "meta-llama/Llama-2-13b-chat-hf",
-     quantization_config=bnb_config,
-     device_map={"": 0},
- )
- model.config.use_cache = False
- model.config.pretraining_tp = 1
-
- # Attach the fine-tuned LoRA adapter.
- model = PeftModel.from_pretrained(model, "TuningAI/Llama2_13B_startup_Assistant")
-
- tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf", trust_remote_code=True)
- tokenizer.pad_token = tokenizer.eos_token
- tokenizer.padding_side = "right"
-
- # Build the generation pipeline once, outside the prompt loop.
- pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=400)
-
- system_message = "Given a user's startup-related question in English, you will generate a thoughtful answer in English."
- while True:
-     input_text = input(">>> ")
-     prompt = f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n{input_text} [/INST]"
-     result = pipe(prompt)
-     print(result[0]["generated_text"].replace(prompt, ""))
- ```
+ - PEFT 0.4.0
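The removed Fine-tuning Techniques section, together with the `bitsandbytes` config recorded in the README and the `adapter_config.json` added below, pins down the rough shape of the QLoRA setup. The following is a sketch reconstructed from those recorded values, not the author's actual training script (which is not part of this commit):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization, matching the recorded bitsandbytes config
# (double quantization is disabled there).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=False,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf",
    quantization_config=bnb_config,
    device_map={"": 0},
)
# Standard preparation step for training on a k-bit quantized base model.
base_model = prepare_model_for_kbit_training(base_model)

# LoRA hyperparameters copied from the adapter_config.json in this commit.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices train; the 4-bit base stays frozen
```

Only the `q_proj` and `v_proj` attention projections carry adapters, which is why the uploaded `adapter_model.bin` is roughly 200 MB rather than the full 13B-parameter weights.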
 
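The removed metadata listed `TuningAI/Startup_V1` as the training dataset. A minimal inspection sketch with the `datasets` library, assuming the dataset is publicly accessible; its splits and column names are not documented in this commit, so the code prints them rather than assuming any:

```python
from datasets import load_dataset

# Assumes the dataset named in the removed README metadata is public.
ds = load_dataset("TuningAI/Startup_V1")

# Splits and columns are undocumented in this commit, so discover them.
for split, data in ds.items():
    print(split, data.num_rows, data.column_names)
print(next(iter(ds.values()))[0])  # peek at one example
```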
adapter_config.json ADDED
@@ -0,0 +1,21 @@
+ {
+   "auto_mapping": null,
+   "base_model_name_or_path": "meta-llama/Llama-2-13b-chat-hf",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "lora_alpha": 16,
+   "lora_dropout": 0.1,
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 64,
+   "revision": null,
+   "target_modules": [
+     "q_proj",
+     "v_proj"
+   ],
+   "task_type": "CAUSAL_LM"
+ }
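This is the file PEFT reads back when the adapter is loaded. A small sketch of inspecting it, assuming the hub repo id used in the README example:

```python
from peft import PeftConfig

# Loads adapter_config.json from the hub repo (or a local directory).
config = PeftConfig.from_pretrained("TuningAI/Llama2_13B_startup_Assistant")
print(config.base_model_name_or_path)  # meta-llama/Llama-2-13b-chat-hf
print(config.peft_type, config.task_type)  # LORA, CAUSAL_LM
```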
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e22694a34c85452918e566665e4dfad4340a4970f32157c167df5eb928b05748
+ size 209772877
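The blob above is a Git LFS pointer, not the weights themselves; Git LFS (or the hub download client) resolves it to the real binary. A small sketch that verifies a downloaded copy against the size and sha256 recorded in the pointer (the local file path is illustrative):

```python
import hashlib
import os

path = "adapter_model.bin"  # illustrative local path to the downloaded weights

# Values recorded in the LFS pointer above.
expected_size = 209772877
expected_sha256 = "e22694a34c85452918e566665e4dfad4340a4970f32157c167df5eb928b05748"

assert os.path.getsize(path) == expected_size, "size mismatch"

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)
assert digest.hexdigest() == expected_sha256, "sha256 mismatch"
print("adapter_model.bin matches the LFS pointer")
```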