Commit 76e485d by 01GangaPutraBheeshma (parent d6806d6): Update README.md
colab_code_generator_FT_code_gen_UT is an instruction-following large language model trained on Google Colab Pro with a T4 GPU and fine-tuned from 'Salesforce/codegen-350M-mono', which is licensed for commercial use. Code Generator_UT is trained on ~19k instruction/response fine-tuning records from 'iamtarun/python_code_instructions_18k_alpaca'.

### Loading the fine-tuned Code Generator
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer  # needed to load the matching tokenizer

test_model_UT = AutoPeftModelForCausalLM.from_pretrained("01GangaPutraBheeshma/colab_code_generator_FT_code_gen_UT")
test_tokenizer_UT = AutoTokenizer.from_pretrained("01GangaPutraBheeshma/colab_code_generator_FT_code_gen_UT")
```
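Once the model and tokenizer are loaded, generation can be wrapped in a small helper. This is a minimal sketch, not part of the original README: the `generate_code` function name is hypothetical, and the Alpaca-style `### Instruction:` / `### Response:` prompt format is an assumption based on the 'iamtarun/python_code_instructions_18k_alpaca' fine-tuning dataset.

```python
def generate_code(model, tokenizer, instruction, max_new_tokens=128):
    # Assumed Alpaca-style prompt format, matching the fine-tuning dataset.
    prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
    # Tokenize the prompt into PyTorch tensors for model.generate.
    inputs = tokenizer(prompt, return_tensors="pt")
    # Greedy generation up to max_new_tokens new tokens.
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode the full sequence (prompt + completion) back to text.
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

For example, `generate_code(test_model_UT, test_tokenizer_UT, "Write a function to add two numbers")` would return the decoded completion as a string.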