Xenova and hl-tburns committed
Commit
df32c35
1 Parent(s): 81481ec

Update code example in readme to use `device` variable instead of hard coded `cuda` (#9)


- Update code example in readme to use `device` variable instead of hard coded `cuda` (a62f67432d07f2f12288a96aefd3605791a8c59e)


Co-authored-by: Tanner Burns <[email protected]>

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -40,7 +40,7 @@ model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
 messages = [{"role": "user", "content": "List the steps to bake a chocolate cake from scratch."}]
 input_text=tokenizer.apply_chat_template(messages, tokenize=False)
 print(input_text)
-inputs = tokenizer.encode(input_text, return_tensors="pt").to("cuda")
+inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
 outputs = model.generate(inputs, max_new_tokens=100, temperature=0.6, top_p=0.92, do_sample=True)
 print(tokenizer.decode(outputs[0]))
 ```
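For context, a minimal sketch of the full README example as it reads after this change. The setup lines before the hunk (`checkpoint`, `device`, `tokenizer`, `model`) are reconstructed here, and the checkpoint name is a placeholder assumption, not taken from this diff:

```python
# Sketch of the README example after this commit; the checkpoint name below
# is a placeholder assumption, not taken from this diff.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM-1.7B-Instruct"  # placeholder; use the checkpoint this README documents
device = "cuda" if torch.cuda.is_available() else "cpu"  # fall back to CPU when no GPU is present

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

messages = [{"role": "user", "content": "List the steps to bake a chocolate cake from scratch."}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)

# The change in this commit: move the inputs to `device` instead of hard-coding
# "cuda", so the example also runs on CPU-only machines.
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=100, temperature=0.6, top_p=0.92, do_sample=True)
print(tokenizer.decode(outputs[0]))
```

Keeping the inputs on the same device as the model avoids a runtime device-mismatch error and lets a single example serve both GPU and CPU users.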