davidkim205 committed
Commit
7df61eb
1 Parent(s): 4320b80

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -24,7 +24,7 @@ This study addresses these challenges by introducing a multi-task instruction te
  * **Model Developers** : davidkim(changyeon kim)
  * **Repository** : https://github.com/davidkim205/komt
  * **Lora target modules** : q_proj, o_proj, v_proj, gate_proj, down_proj, k_proj, up_proj
- * **Model Size** : 80MB
+ * **Model Size** : 244MB
  * **Model Architecture** : komt-llama-2-30b is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning with multi-task instruction data.
  ## Dataset
  Korean multi-task instruction dataset
 
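For reference, the **Model Size** entry above is the LoRA adapter checkpoint, not the 30B base model, which is why it is measured in megabytes. Below is a minimal sketch of a PEFT `LoraConfig` covering the target modules named in the card; the rank, alpha, and dropout values are illustrative assumptions, not the settings actually used to train komt-llama-2-30b.

```python
# Minimal sketch: a LoRA configuration targeting the projection modules
# listed in the model card. r / lora_alpha / lora_dropout are assumed
# values for illustration, not the repository's training hyperparameters.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                  # assumed rank; adapter size grows with r
    lora_alpha=32,         # assumed scaling factor
    lora_dropout=0.05,     # assumed dropout
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```

Because the adapter stores only the low-rank update matrices for the listed projections, its on-disk size scales with the rank and the number of targeted modules rather than with the base model's parameter count.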