dahara1 committed
Commit 17c39c9
1 Parent(s): f4bee39

Update README.md

Files changed (1): README.md (+4 -3)
README.md CHANGED

@@ -33,8 +33,9 @@ model = AutoGPTQForCausalLM.from_quantized(
     use_safetensors=True,
     device="cuda:0")
 
-prompt = "スタジオジブリの作品を5つ教えてください"
-prompt_template = f"### 指示: {prompt}\n\n### 応答:"
+
+prompt_text = "スタジオジブリの作品を5つ教えてください"
+prompt_template = f'以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。\n\n### 指示:\n{prompt_text}\n\n### 応答:'
 
 tokens = tokenizer(prompt_template, return_tensors="pt").to("cuda:0").input_ids
 output = model.generate(input_ids=tokens, max_new_tokens=100, do_sample=True, temperature=0.8)
@@ -51,7 +52,7 @@ Also, the score may change as a result of tuning after this.
 
 * **Japanese benchmark**
 
-- *We used [Stability-AI/lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable) + gptq patch for evaluation.*
+- *We used [Stability-AI/lm-evaluation-harness + gptq patch](https://github.com/webbigdata-jp/lm-evaluation-harness) for evaluation.*
 - *The 4-task average accuracy is based on results of JCommonsenseQA-1.1, JNLI-1.1, MARC-ja-1.1, and JSQuAD-1.1.*
 - *model loading is performed with gptq_use_triton=True, and evaluation is performed with template version 0.3 using the few-shot in-context learning.*
 - *The number of few-shots is 3,3,3,2.*
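The commit replaces the short `### 指示:`-only prompt with an Alpaca-style Japanese instruction template (a fixed preamble meaning "Below is an instruction that describes a task. Write a response that appropriately completes the request.", then the instruction, then a response marker). A minimal sketch of just the string assembly from the new version of the README — model and tokenizer loading via auto-gptq and CUDA are omitted, so this runs anywhere:

```python
# Sketch of the updated prompt construction from the diff above.
# Only the string assembly is shown; generating text additionally
# requires the AutoGPTQ model and tokenizer set up in the README.

prompt_text = "スタジオジブリの作品を5つ教えてください"  # "Name five Studio Ghibli works"

# Alpaca-style template: fixed Japanese preamble, instruction, response marker.
prompt_template = (
    "以下は、タスクを説明する指示です。"
    "要求を適切に満たす応答を書きなさい。\n\n"
    f"### 指示:\n{prompt_text}\n\n### 応答:"
)

print(prompt_template)
```

Keeping the template identical to the one used at fine-tuning time matters: instruction-tuned models are sensitive to the exact preamble and section markers, which is presumably why this commit aligns the README example with the training format.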