kobkrit committed
Commit d55900b
Parent: 61ea1c9

Update README.md

Files changed (1): README.md (+8, -3)
README.md CHANGED

@@ -13,7 +13,7 @@ datasets:
  language:
  - th
  - en
- library_name: adapter-transformers
+ library_name: transformers
  pipeline_tag: text-generation
  tags:
  - openthaigpt
@@ -26,10 +26,12 @@ https://openthaigpt.aieat.or.th/" width="200px">

  OpenThaiGPT Version 1.0.0-alpha is the first Thai implementation of a 7B-parameter LLaMA v2 Chat model, finetuned to follow Thai-translated instructions and built on the Huggingface LLaMA implementation.

+ **Full Huggingface Checkpoint Model**
+
  ## Upgrade from OpenThaiGPT 0.1.0-beta
  - Uses Facebook's LLaMA v2 7B chat model as the base, which is pretrained on over 2 trillion tokens.
  - Context length is upgraded from 2048 tokens to 4096 tokens.
  - Allows research and commercial use.

  ## Pretrain Model
  - [https://huggingface.co/meta-llama/Llama-2-7b-chat](https://huggingface.co/meta-llama/Llama-2-7b-chat)
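The `library_name` switch to `transformers` and the new "Full Huggingface Checkpoint Model" banner mean this repo is meant to load with the stock `transformers` API rather than as an adapter. A minimal sketch, assuming the LLaMA v2 Chat `[INST]` prompt format (the model id comes from the weight links in this README; the prompt format itself is not specified here):

```python
# Minimal sketch: load the full Huggingface checkpoint with transformers.
# The model id is taken from this README; the [INST] chat format is an
# assumption carried over from LLaMA v2 Chat, not confirmed by the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openthaigpt/openthaigpt-1.0.0-alpha-7b-chat-ckpt-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Thai instruction: "Hello, what is OpenThaiGPT?"
prompt = "[INST] สวัสดีครับ OpenThaiGPT คืออะไร [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```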
@@ -49,7 +51,10 @@ OpenThaiGPT Version 1.0.0-alpha is the first Thai implementat
  **Finetune Code**: https://github.com/OpenThaiGPT/openthaigpt-finetune-010beta<br>
  **Inference Code**: https://github.com/OpenThaiGPT/openthaigpt<br>
  **Weight (Lora Adapter)**: https://huggingface.co/openthaigpt/openthaigpt-1.0.0-alpha-7b-chat<br>
- **Weight (Huggingface Checkpoint)**: https://huggingface.co/openthaigpt/openthaigpt-1.0.0-alpha-7b-chat-ckpt-hf
+ **Weight (Huggingface Checkpoint)**: https://huggingface.co/openthaigpt/openthaigpt-1.0.0-alpha-7b-chat-ckpt-hf<br>
+ **Weight (GGML)**: https://huggingface.co/openthaigpt/openthaigpt-1.0.0-alpha-7b-chat-ggml<br>
+ **Weight (Quantized 4bit GGML)**: https://huggingface.co/openthaigpt/openthaigpt-1.0.0-alpha-7b-chat-ggml-q4
+

  ## Sponsors
  Pantip.com, ThaiSC<br>
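The diff also keeps the LoRA-adapter repo alongside the full checkpoint. For the adapter route, a sketch with `peft`, under two assumptions: the adapter repo is PEFT-compatible, and the Huggingface-format base `meta-llama/Llama-2-7b-chat-hf` stands in for the pretrain-model link above (Meta's license must be accepted to download it):

```python
# Sketch: apply the published LoRA adapter on top of a LLaMA v2 chat base.
# Assumptions: the adapter repo is in PEFT format, and the HF-format base
# "meta-llama/Llama-2-7b-chat-hf" substitutes for the pretrain link above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "openthaigpt/openthaigpt-1.0.0-alpha-7b-chat"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # LoRA weights over frozen base
```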
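For the newly added GGML weights, `llama-cpp-python` is one plausible CPU-friendly runner. Two hedges: GGML files load only in GGML-era releases (roughly before 0.1.79; later versions expect GGUF), and the local file name below is hypothetical, not read from the repo:

```python
# Sketch: run the quantized 4-bit GGML weights with llama-cpp-python.
# GGML-era releases only (newer versions require GGUF); the model_path
# file name is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./openthaigpt-1.0.0-alpha-7b-chat-ggml-q4.bin",  # hypothetical
    n_ctx=4096,  # matches the 4096-token context noted in this commit
)
# Thai instruction: "Translate the word 'hello' into Thai"
out = llm("[INST] แปลคำว่า 'hello' เป็นภาษาไทย [/INST]", max_tokens=64)
print(out["choices"][0]["text"])
```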