hendrydong and johnowhitaker committed on
Commit 94fad49
1 Parent(s): 883c72b

Update README.md (#4)


- Update README.md (5eefeb2345a1f3399427c8faf937d9ad38028c3d)


Co-authored-by: Jonathan Whitaker <[email protected]>

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -43,7 +43,7 @@ We use the training script at `https://github.com/WeiXiongUST/RLHF-Reward-Model
      {"role": "user", "content": "I'd like to show off how chat templating works!"},
  ]
 
- test_texts = [tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False).replace(tokenizer.bos_token, "")]
+ test_texts = [rm_tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False).replace(rm_tokenizer.bos_token, "")]
  pipe_outputs = rm_pipe(test_texts, **pipe_kwargs)
  rewards = [output[0]["score"] for output in pipe_outputs]
  ```
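For readers following the diff, the `.replace(..., "")` call strips the BOS token that `apply_chat_template` typically prepends, so the pipeline's own tokenization does not add it twice; the fix simply uses the reward model's tokenizer (`rm_tokenizer`) rather than an undefined `tokenizer`. A minimal self-contained sketch of that behavior, using a hypothetical `DummyTokenizer` stand-in instead of the real `transformers` tokenizer from the README:

```python
# DummyTokenizer is a hypothetical mock standing in for rm_tokenizer from
# the README; it only mimics the parts of the interface the snippet uses.
class DummyTokenizer:
    bos_token = "<s>"

    def apply_chat_template(self, chat, tokenize=False, add_generation_prompt=False):
        # Chat templates commonly prepend the BOS token to the rendered text.
        turns = "".join(f"[{m['role']}] {m['content']}\n" for m in chat)
        return self.bos_token + turns


rm_tokenizer = DummyTokenizer()
chat = [
    {"role": "user", "content": "I'd like to show off how chat templating works!"},
]

# Same expression as the corrected README line: render the chat, then strip
# the BOS token so the downstream pipeline does not duplicate it.
test_texts = [
    rm_tokenizer.apply_chat_template(
        chat, tokenize=False, add_generation_prompt=False
    ).replace(rm_tokenizer.bos_token, "")
]
```

After the replace, `test_texts[0]` no longer starts with `<s>`, which is the whole point of the corrected line.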